
If they were "entirely" ethical (which is a silly concept but it's already been deployed on this thread so I'll run with it), they'd be more up-front about the features and limitations of their security model. So:

* Modern phones (and all the flagship phones) have had separation between their basebands and APs for years; a modern smartphone baseband is essentially a USB peripheral.

* The two largest smartphone vendors have large, world-class security teams that do things like audit their basebands. Has Purism?

* A modern flagship smartphone will have some kind of secure enclave. Apple's has dedicated silicon, and an encrypted memory bus linking it to the AP. How does Purism's hardware security model compare?

* I don't know how much Apple and Google spend annually on outside security research for their flagship phones, but it's a lot. Who has Purism engaged to evaluate their designs and spot flaws?

If you want to use a niche phone as a fashion or political statement, more power to you. But if you try to market that phone as "transparent code is the core of secure systems", I'll take issue with that; it's neither a necessary nor a sufficient condition for security.

This phone may very well be more "fair" or "ethical" than an iPhone. But if it's not as secure as an iPhone, it's unethical to claim otherwise.




> a modern smartphone baseband is essentially a USB peripheral

Since when? I'm going to need evidence for a claim like this -- as far as I was aware, Apple is the only one that does this, because they make their own SoC and then use a physically separate Qualcomm/Intel(?) modem.

All the other flagship smartphones use a Snapdragon eight-something with everything on a single die. How could you possibly claim with a straight face that it's "only" (really, truly, pinky swear) a USB peripheral, with no other route to main memory, which oh-by-the-way is also on-package?


Update: as far as I can tell (qualcomm datasheets are annoyingly hard to come by), everything was still over PCIe as of early last year. Their modems are getting faster, not slower, so I would be surprised to see this change to USB.

PCIe would give the baseband full access to main memory unless the IOMMU is set up correctly (there's lots of evidence that phone manufacturers have been lazy about this), and that also assumes the IOMMU-based protection actually works correctly, which is next to impossible to verify.


I think you're understating how long IOMMUs have been in common use on SoCs. It's not new tech.

(And I think tptacek's "essentially a USB peripheral" might be intended to be read as "metaphorically a USB peripheral" -- i.e. with only the capabilities you'd expect a USB peripheral to have, even when hooked up over PCIe, because of the IOMMU.)
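
For anyone unfamiliar with the mechanism, here's a toy model of that reading in Python (purely illustrative, not real kernel code): the host installs mappings for the handful of buffers the device is allowed to touch, and any DMA outside those mappings faults instead of reaching the rest of DRAM.

    # Toy model of "shares the DRAM chip but only sees what it's mapped":
    # the host installs per-device mappings; DMA outside them faults.
    class ToyIOMMU:
        def __init__(self):
            self.mappings = {}  # device-visible I/O address -> physical page

        def map(self, iova, phys):
            self.mappings[iova] = phys

        def dma_read(self, iova):
            if iova not in self.mappings:
                raise PermissionError("IOMMU fault: unmapped I/O address")
            return self.mappings[iova]  # stand-in for reading that page

    host = ToyIOMMU()
    host.map(0x1000, 0x8000_0000)      # one shared buffer for the baseband
    print(host.dma_read(0x1000))       # allowed
    try:
        host.dma_read(0x2000)          # the rest of DRAM simply isn't visible
    except PermissionError as e:
        print(e)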


Have you seen how these smartphone manufacturers handle software development? The majority of them bring up the hardware just barely enough to make it look like all of the peripherals work, then they ship it.

They're not even going to bother configuring the IOMMU, because there won't be a perceptible difference in behavior to their boss or to the average user. It has nothing to do with how long the feature has been around; their priorities are just fucked.


Which manufacturers are you referring to? When I said "flagship phones", I meant the iPhone and the Pixel.


I meant Samsung's $700+ phones, and maybe LG's $700+ phones, the only mainstream competitors to the iPhone. Normal people still don't buy Pixels.


I am fine with saying that this Librem phone will be as secure as the best Samsung and LG can do.


I don't know about the US, but in Germany nobody except nerds cares about Pixel phones. I think I have only seen one in the wild among friends and acquaintances.

You're either an iPhone guy or a Samsung Galaxy guy.

Or you want to save money, in which case you probably have a OnePlus or maybe a Huawei.


I wouldn't say the nerds care about Pixel phones. Google's older Nexus phones were and are really popular, Honor isn't too rare, and yes - OnePlus and Huawei. Too bad HTC seems to lag behind.

I'm personally hoping my 5X will survive and be updated for a long time, because I haven't seen anything in my preferred price range that speaks to me in a long time.


I don't know what information Apple collects, but Google is definitely the opposite of ethical. If you have Google Play Services, they continuously ask you to enable location tracking, tracking of which cell towers and WiFi networks are around, and so on.


And even with your location OFF, it still continues to collect location data [1] and build a detailed tracking map [2]

[1] http://www.eweek.com/cloud/google-location-tracking-continue... [2] https://www.howtogeek.com/195647/googles-location-history-is...


And on Android 5, every time you enable GPS, a popup appears asking you to let Google track your location [1]. If you tick the "Don't ask anymore" checkbox, the "Decline" button becomes disabled so that you don't make the wrong choice.

[1] https://android.stackexchange.com/questions/115944/in-lollip...


On the iPhone, it's literally a USB peripheral, unless it changed recently; it's connected to the AP via HSIC.


The iPhone is probably the only major device where the modem is a USB peripheral. Qualcomm, Samsung, and MediaTek all offer DMA between their radios and ARM cores.

I certainly don't trust any of those players to ship even relatively secure or modern code to boot the ARM cores, let alone to properly protect memory.

With the Purism stance of ensuring the baseband is on separate silicon, on a relatively limited (and easily hardware isolated) bus, they're able to effectively treat the baseband as an untrusted device.


Could it be that they just allow the modem to use the same RAM and ROM that the CPU uses, so that they don't have to install another RAM chip?


Yes/No/It's Complicated.

If by "ROM" you mean the persistent storage holding the modem's firmware, then absolutely -- that firmware lives in a "vendor" partition on the same eMMC as the OS.

When talking about "RAM", things get a bit more complex. Most USB and even PCIe modems ship with enough built-in (that is, on-die) RAM to get the job done. They have absolutely no need for the quantities or qualities of external DDR (where your OS is running from most of the time).

That said, when the modem is embedded in the SoC, all bets are off -- it's really convenient to share as much as possible, and it's entirely possible for the modem cores to depend on the same DDR memory as your ARM cores.


On Qualcomm SoCs, a chunk of the memory area is reserved for the baseband and another section is reserved for the TrustZone kernel, much like an integrated GPU in a PC shares system memory. The baseband then connects via a pseudo-tty exposed by a kernel driver, but since it has more privileges than the main CPU, it really could do whatever it wants, just like TrustZone.


It can share the CPU's RAM chip without being able to read from (the rest of) it, because of the IOMMU.


And Qualcomm would never improperly configure their SMMU:

https://nvd.nist.gov/vuln/detail/CVE-2016-10444

They only sat on this one for two years before fixing it.


This is also how Apple can manufacture both 4G and WiFi-only iPads: by omitting this component in the latter.


I'm pretty sure he's got very little idea about it, but it isn't stopping him from attempting to undermine this project. I find that fascinating.

The Five Eyes have just finished re-affirming to each other that they'll strong-arm and backdoor any popular platform and company product they can get their hands on. What point is there to your nicely built Apple device then? That's rhetorical; there is no point having a secure device that your government can pry open at its whim - be it hardware- or software-backdoored.


> But if it's not as secure as an iPhone, it's unethical to claim otherwise.

This seems silly; you can easily flip it on its head and say "Well, the iPhone is closed, so there's no way to verify it has any security." You can't separate the ethics of security advertising from the ethics of everything else about how the device works.


Closed source and unverifiable are more orthogonal than not.


I agree--after all, security is generally unverifiable anyway, even with perfect visibility.

However, in the context of ethical security conversations, the two are anything but orthogonal. The security expectations of someone buying an open phone and the security expectations of someone buying "private" access into a proprietary platform will be different. Sure, my baseband could be compromised, but if I'm more worried about leaking information to ad networks, what I really need is a firewall and a hosts file, not an isolated baseband unit. If we're talking about an ethical security conversation, maybe we should mention the fact that this is the security Apple has to control what code runs on your phone, not the security a user has to control the flow of their data. Though you could presumably pursue both goals, Apple shows interest only in the former.

My point being the two phones are extremely different products, especially regarding ethics, so the straightforward comparison and verdict struck me as extremely narrow-minded about how people use their phones.


I don't understand the hand-waving here. Does the phone have a comparably secure design to that of an iPhone or not? What other question is there to ask about the security of the phone?

I bring it up because the project makes a very big deal out of how much security their ethical approach adds. From what I can tell, their approach nets them materially less security than either flagship phone.


The security of an iPhone is completely dependent on what Apple decides to do with iCloud and with the applications that come from the app store. Sure, Apple may be more secure against $random_blackhat, but the user still isn't in control of what is happening on their phone.

Most users care much more about the decisions that Apple and Google make with their data than about $random_blackhat.

Even more, a lot of the security features you mentioned are very difficult for an open source phone to achieve because the hardware ecosystem is so endemically closed source and proprietary. We can fix that by pushing hard for open source hardware that starts to make inroads and break those barriers down.


I generally value tptacek's comments on security matters, but this position doesn't make sense to me.

When Apple decides to take away an app I paid for, I have no security to block them. When Apple decides to quit posting security updates, I'm out of luck. When Google reaches in and takes my location data without permission, I can't stop them.

With open software, there's no guarantee I'm more secure. But when some security issue does come to my attention, I'm more likely to have some say in how I respond. Somebody claims to have a more secure or privacy-respecting driver? Strongly recommended by experts I trust? I'll look into it. With Apple, I can't. There is no such alternative.

The ssl lib has known vulnerabilities? Wait for Apple? Or install some shim until there's a fix? Oh wait, on Apple devices, you don't have an option.

Apple is not only closed. It's walled. As the owner I am kept on the wrong side of the wall. And there are parties on the other side of the wall that I don't want to be on that side.

This is a security model that is hard to lose to. If Purism lets me control the wall, I'd say that's a welcome change.


And I generally vehemently disagree with tptacek and most of what he stands for, but he's correct here.

> When Apple decides to take away an app I paid for, I have no security to block them. When Apple decides to quit posting security updates, I'm out of luck. When Google reaches in and takes my location data without permission, I can't stop them.

This is not security. This is some weird definition of security you're using here.

> With open software, there's no guarantee I'm more secure. But when some security issue does come to my attention, I'm more likely to have some say in how I respond. Somebody claims to have a more secure or privacy-respecting driver? Strongly recommended by experts I trust? I'll look into it. With Apple, I can't. There is no such alternative.

Software security doesn't much matter with phones when the hardware security is crippled or non-existent.

Software security means fuck-all if you can pull from the phone's memory via a cable, via the modem, or via JTAG. The iPhone is the most secure hardware of any phone on the market, by quite a long shot.

Reading through the comments here is frustrating because tptacek seems to be talking about hardware and all of the responders are talking about software. It's not like your personal computer where you "need to get physical access in a powered on state". Getting access to an adversary's phone is _beyond trivial_.


"And most of what I stand for"?


Let's sidestep the politics today, eh? We're doin' good. Little bit of hyperbole there.

I'm just trying to emphasize that your point is sound, not doing this for HN personality-cult reasons.


> I don't understand the hand-waving here. Does the phone have a comparably secure design to that of an iPhone or not?

Understanding means looking beyond security design to security principles. If one company magically decides to move your data to a government-controlled cloud without any public discussion of why, or of whether they even agree with it, the design is not going to matter. When the next internal decision quietly degrades security in some undiscussed, opaque way, you won't be able to point to security audits to explain away the folly of human nature.


Sure, it can be compared, but it’s not a one-dimensional comparison to my eyes—you’d have to pick a dimension of security to compare to get a proper ordering.

For instance in this case it seems like the iPhone has (considerably) more hardened hardware for many scenarios where you lose control of your phone. That’s not the only sense of security people care about. From the perspective of securing the phone from doing unwanted things without your permission, you have to trust Apple entirely to make the right decision. If you don’t trust Apple to make these decisions, the phone does not reflect a secure interaction.


How can they be orthogonal when eventually you need open software of some kind to make a verification?

If the software I use to verify the closed source software is closed, then I will need another piece of software to verify that the verify-ing software is what it says it is and so on until we reach a piece of software that I can verify with my own eyes (or a friend has verified with their eyes).

We all eventually have to place our trust somewhere, but we shouldn't have to agree with where you decided to place yours.


If you didn't trust the closed source disassembler you use, for whatever reason, you would verify the assembly output, not the actual software. In practice this is often done unintentionally anyway: it's common to run a debugger (ex: gdb, windbg) alongside an annotating disassembler (like IDA). objdump (from GNU binutils) also supports many non-Unix executable formats, including PE (Windows) and Mach-O (OSX/iOS).

For fun I just compared objdump's entry point function assembly of a downloaded Pokemon Go IPA to the Hopper representation, and, no surprise, they are identical.
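
For anyone who wants to reproduce that kind of cross-check, here's a rough Python sketch. The binary path is a placeholder, it assumes an objdump build with Mach-O support on your PATH, and you'd diff the result against the other tool's export by hand or with difflib.

    # Rough sketch of cross-checking disassemblers: dump the binary with
    # objdump, normalize a little, and diff against another tool's listing.
    import subprocess

    def objdump_listing(path):
        # -d: disassemble executable sections; works on Mach-O binaries if
        # binutils was built with that target enabled.
        out = subprocess.run(["objdump", "-d", path],
                             capture_output=True, text=True, check=True)
        return [line.rstrip() for line in out.stdout.splitlines() if line.strip()]

    listing_a = objdump_listing("Payload/pokemongo.app/pokemongo")  # placeholder path
    # listing_b = ...  # the same function exported from Hopper/IDA, cleaned the same way
    # then: difflib.unified_diff(listing_a, listing_b) and eyeball any mismatches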


This is not a position widely held in the software security industry.

Quite the opposite: it's axiomatic in the industry that you need the capability to verify closed source.


You don't need open software of any kind to verify it.


It sure helps though. And being able to compare source code to binary is good for checking blobs.

No one knows what nasties are hidden in these blobs that run on EVERY phone out there. I’d actually go as far as saying cell phone processing modules are completely insecure. Unlikely to receive updates. Poorly documented. Security standards are likely to be weakly implemented with gaping holes.

As for proof, I’d point at the lack of source code as a starting point. Open source doesn’t guarantee security but it at least lets interested parties try to the degree they want or need to.


> As for proof, I'd point at the lack of source code as a starting point.

Much better proof would be frequent exploits through the vectors you consider examples of glaring insecurity.


Sure. Go and have a look at GSM and friends. I'd say the cellular network in general is routinely being manipulated. The over-the-air protocols and standards are demonstrably insecure, and assuming the actual hardware is insecure as well is reasonable.


Part of the point of this thread is that 'modern flagship phones' are designed with problems like that in mind. You're the one claiming to have 'proof' they are trivially insecure.


I pointed at the lack of source code as a starting point for proof of complete insecurity. I pointed at the ease of manipulating the existing protocols in active use as an example of that insecurity. That insecurity is the basis for Stingray fake towers. If you can fake the tower, then the cellular modules can't be much better.

I’m sure various agencies are quite frustrated by their inability to use the cellular modem as an entry point into Apple’s phones. That by itself is another pointer.

I didn’t claim to have proof other than that.


That's proof for the values of proof that include 'innuendo'.


You haven’t addressed any of my points to any particular depth. So I’m not sure why you have such faith in this tech.


You shouldn't flip things around, because there is observable evidence beyond source code. Not knowing everything about the world doesn't mean we know nothing.


Not knowing everything means there could easily be some unknown element that renders our knowledge moot.


You can observe it, though: the iPhone jailbreak community is most of the time two patches behind iOS, and at best one.


The fact that there is still the potential for jailbreaks in iOS surprises me and doesn't seem to help the secure-by-default argument.

Why is a trillion-dollar company which designs its own silicon only two steps ahead of some hackers? With $200 billion sitting unspent in the bank, shouldn't those devices be reviewed and redesigned until they're impregnable?

Computing devices are usually insecure due to limits on experience, time and cost. None of those applies to Apple in any meaningful manner.

Personally I'll just stick with cheap Android phones running custom ROMs and treat them as insecure, disposable terminals.


As long as there is a hardware input source (USB), it's impossible to guarantee security; that is the reality. There is not a single device with an input source such as USB that is secure. Physical access guarantees a lack of security.

Also, it's not just "some hackers". There's quite a big community out there looking for bugs, and every time, such 'hacking' requires an unlocked phone.


> Instead of $200 billion in the bank, unspent, shouldn't those devices be reviewed and redesigned until impregnable?

The problem is the Mythical Man-Month. Apple's devices are built on a mountain of ancient C code. XNU is a bizarre Mach/BSD hybrid, built for the sake of expedience. (This is not to bash Apple; most other devices aren't much better.)

Apple has been working to improve the situation, but there's a limit as to how rapidly all those millions of lines of code can be changed.


I accept that "transparent code" isn't sufficient in itself to provide a secure system, but saying that it is not necessary is just false.

As governments push more and more to have backdoors (or "side gates", here in Aus), it is becoming very important to have fully open code, as it is the only way to get around the vulnerabilities governments will be requiring phones to have in them.


Realistically the security research community who find things like phone exploits are very well accustomed to a lack of source code, and in many particular niches of infosec -- it's a baseline assumption for almost all analysis work. Apple's security team is the reason the iPhone is an extremely secure device, not because it's closed source, and not for lack of trying. Go ask people who find major security vulnerabilities -- binaries are rarely a true obstacle, time and mitigations are. (And you'll probably come away surprised at how easily most systems fall to even basic intrusive analysis, no source code needed.)

Source code is merely convenient for the security community in cases like this. But it's hardly a requirement at all, and you can absolutely build a secure system if you design for it.

All that said, I think source code for a device as personal as a phone is largely more of a question of ethics and this is what Purism heavily leans on as an angle for their marketing. I think that's a good angle, independent of any security aspects: owning the code and having the right to it (even if you don't ever touch it) is an important aspect of owning a device as deeply personal as a phone. People like us who believe that should totally advocate for it -- I think it's a fairly simple case to make. But it really has little to do with _security_ on its nose, if you ask me.

(Also, if your government mandates backdoors, they will not be thwarted so easily by simply recompiling AOSP or whatever and loading a new ROM, if they do not wish to be. Any belief to the contrary is a fairy tale that people like us enjoy telling ourselves. But I'll give you a hint: writing a Makefile cannot get you out of jail, and you are not that good at covering your tracks.)


> Realistically the security research community who find things like phone exploits are very well accustomed to a lack of source code

Realistically, though, a manufacturer or intelligence agency will be the one paying security people for thorough reviews. Anyone doing this in the open would do it to build a reputation or for their own security, and will need much more time to review something that first needs to be reverse engineered. They will almost certainly not be able to look at everything.

It's not logical to say that we security people are used to reading obfuscated binaries anyway and to conclude that, therefore, no fewer bugs will be found than if it were open source.


This is one of the rare instances in which there actually is a liquid market for viable vulnerabilities and exploits, so no, I don't think it's safe to say that the only people doing this kind of research are in the IC or vendors.


Apple is the gold standard of mobile security and also a shining example of a closed source, proprietary ecosystem. This is coming from a dedicated Android user who has a preorder for a Librem 5. I did not order it for security reasons, I ordered it for ease of development. If I had sensitive information on my phone, it would be an iPhone. I just prefer living in the wild west.


Well, secure for whom?

If a developer does not reveal the source of a program, then surely a user of this program is not secure against that developer, who can make that software do anything without easily being discovered.

Therefore, if we are talking about security from the user's perspective, transparent code is definitely not sufficient for security, but it is a precondition.

If you make the (IMO wrong) assumption that a developer on principle can never be a security threat to a user, then sure, you can claim that something non-free like the iPhone is "secure". I guess it's secure in that Apple can be very sure you won't use it to run GPL'ed software?


Having the binary is enough if you know the architecture. But therein lies the problem with modern systems - you don’t always know the architecture or have enough access to explore. Intel ME is a great example.


You seem to be totally focused on and upset about security, but I don't know that security is the only or number-one point of this phone, or that they ever claimed security parity with Apple. (Unless Apple, or a government with power over them, is part of your threat model, of course.) I think it's primarily about respecting users' freedom to control their software, which the iPhone and Android score terribly on. So your post comes off a bit unfair in my view.


I mainly got interested in it because I want a phone with GNU/Linux on it. If somebody wants to own me, they'll own me no matter what phone I have.


Yeah, I'm mostly interested because the software is designed simply to be a useful tool, rather than with Google or Apple's motives in mind (lock-in, advertising).

It will just be a more pleasant tool to use, without crap getting in my way.


It's funny: most people are concerned with security (rightfully so), but my main frustration is the inability to fix the parts of my OS I don't like. It's just convenient that most security professionals agree open source is the best way to achieve security as well.


That is not a factual statement about what most security professionals believe.


I think this quote is a bit more accurate:

> So does all this mean Open Source Software is no better than closed source software when it comes to security vulnerabilities? No. Open Source Software certainly does have the potential to be more secure than its closed source counterpart. But make no mistake, simply being open source is no guarantee of security.

From https://www.dwheeler.com/secure-programs/Secure-Programs-HOW...


"The two largest smartphone vendors have large, world-class security teams that do things like audit their basebands. Has Purism?"

(and some other points that, I think, miss the point)

Once the baseband is isolated as a USB peripheral and the radio modules are physically pluggable modules ... and have kill switches ... the appropriate comparison is no longer to other phones. The appropriate comparison is to a PC, like a laptop.

This:

https://puri.sm/wp-content/uploads/2018/06/2018-07-26-dev-ki...

... has the shape of a phone, but that sure looks like a PC to me.

I am happy running a PC without a secure enclave, with a USB cellular modem attached, and with only hazy knowledge of the baseband running on that (in my case) USB stick.


Your PC is much, much less secure than your iPhone is.


All modern cryptography relies on random key generation. But the current state of the art is that there is no way to distinguish between truly random numbers and pseudorandom numbers generated deterministically from a key. So how can we even begin to talk about security without openness? Maybe the key generated in the secure enclave of your iPhone never left that enclave - but maybe the NSA has a copy of it all the same.
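
A toy illustration of that point in Python, with SHA-256 in counter mode standing in for a real DRBG (illustrative only, not production crypto): the keyed, deterministic stream and the "truly random" bytes look identical from the outside, so output alone can never tell you whether someone else holds the seed.

    # A keyed, deterministic generator vs. os.urandom: both outputs look
    # equally "random", so inspecting output can't prove nobody has the seed.
    import hashlib, os

    def keyed_stream(seed, nbytes):
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:nbytes]

    print(os.urandom(32).hex())                                      # "truly" random
    print(keyed_stream(b"seed someone else might hold", 32).hex())   # deterministic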


Neither Google nor Apple can – to the best of my knowledge – decide what code is or isn't run on my PC. That is evidently a plus for security, if one doesn't blindly trust these companies.


Thing is, I doubt very much you ensure that your $HOME is safe when that code is run.

And as proven by the set of Linux Security Summit 2018 talks, there is still lots of work to be done regarding security on Linux distributions.


Well, I sandbox closed source apps by running them as a different user that doesn't have access to my $HOME.
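
Roughly like this on Linux, for the curious (a minimal sketch; it assumes a dedicated low-privilege user named "sandbox" already exists and that the launcher itself runs with enough privilege, e.g. via sudo, to switch to it):

    # Launch an untrusted app as a dedicated user whose $HOME isn't mine.
    # Assumes a "sandbox" user exists and this script runs as root (e.g. sudo).
    import os, pwd, sys

    def run_as(username, argv):
        user = pwd.getpwnam(username)
        os.setgid(user.pw_gid)             # drop group before user
        os.setuid(user.pw_uid)
        os.environ["HOME"] = user.pw_dir   # point $HOME at the sandbox home
        os.chdir(user.pw_dir)
        os.execvp(argv[0], argv)           # replace this process with the app

    if __name__ == "__main__":
        run_as("sandbox", sys.argv[1:] or ["some-closed-source-app"])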


That isn't enough to protect from kernel bugs, as discussed at the Linux Security Summit; hence the Kernel Self Protection Project and related initiatives, mostly driven by Google for Android security.

If you use Qubes OS then it is another matter.


You are being unreasonable, because you equate the hypothetical possibility of malicious code execution by Apple or Google due to bugs with running an OS from either of them on their hardware.


You have a completely different security model on a PC, though. A PC is designed as a scaled-down multi-user workstation, where the system administrator is responsible for securing the system. The iPhone, on the other hand, is designed for the novice single-user use case, where the manufacturer is responsible for securing the system.


Your PC doesn't store your data on Chinese state-owned servers.


"Ethical" isn't a silly concept; it sounds like an abstract term, a proposition, while it is actually more nuanced: it is a spectrum, not an absolute. And it goes beyond technology.

There's also social ethics, which the company Fairphone tries to tackle with their products (Fairphone, Fairphone 2, and the upcoming successor). They're transparent about their ethics [1], with examples about sourcing ethical cobalt, the supply chain of the displays, and so on.

[1] https://www.fairphone.com/en/blog/


> But if it's not as secure as an iPhone, it's unethical to claim otherwise.

Do they, or is it a convenient claim to argue against? (honest question as I'm unsure about what they've claimed compared to iPhone specifically). By using volume and money spent as security metrics, it's quite easy to see why large players would be the best.

For many, I suspect that principles drive some of their security assumptions. Companies that are opaque and do things behind users' backs are always going to appear less trustworthy than those that are open about their software and processes. While that doesn't translate to empirical security, it's enough of a paradigm shift from secretive approaches that I hope that facet alone helps them succeed.


>* I don't know how much Apple and Google spend annually on outside security research for their flagship phones, but it's a lot. Who has Purism engaged to evaluate their designs and spot flaws?

At their inception you could have made the same argument about most open-source projects, which usually don't start out decked out with cash. But most trusted and respected projects of today (especially in comp sec) are open source as a prerequisite.


At least regarding 'most ethical', they are in healthy competition with Fairphone (and maybe e.foundation, and others I don't know of).


Somewhat funny how your argument works the other way around too:

Spending a lot on outside security research for flagship phones is neither a necessary nor a sufficient condition for security.

I like what Purism does, and those firmware blobs are one of the most significant issues I have with current smartphones.


Is any documentation about baseband segmentation in SoCs public?

I'm not familiar with the mobile microprocessor space, so curious about any details.


Utter and absolute b.s. The purpose of Purism is privacy, not security. They are not claiming greater security than the iPhone or Pixel; they are claiming transparency, open hardware, and a non-commercial alternative mobile OS.

The price of having "a large team of ______" is either closed hardware or complete loss of privacy.

It pisses me off so much when I read well-reputed people like you saying ridiculous things like this. How big was Linux's security team when it started out? Look at DuckDuckGo and ProtonMail competing with Google Search and Gmail. Are you saying they won't be able to afford a dedicated security audit team once the product starts profiting in the tens of millions?

> If you want to use a niche phone as a fashion or political statement, more power to you. But if you try to market that phone as "transparent code is the core of secure systems", I'll take issue with that; it's neither a necessary nor a sufficient condition for security.

I will make that fashion statement. And you know what, security is all about risk (and I know you could teach me an entire course on this). As an individual, hardware or software that cannot receive an independent review, whether due to lack of openness or lack of transparency in the development process, carries an enormous amount of risk. Even someone as talented as yourself cannot unilaterally audit Android's codebase, and you can't even start with iOS (even if you could, you'd be breaking some fine print somewhere). The fact that others can audit the code and design is a relevant factor when assessing risk and evaluating security. More transparency == less risk == better security.

> This phone may very well be more "fair" or "ethical" than an iPhone. But if it's not as secure as an iPhone, it's unethical to claim otherwise.

What happened to threat modeling? As an individual, I care much more about Google tracking my activity and sharing it (I'm sure you read the recent AP piece revealing Google doing just that even when location was turned off) than I am about Russians using a 0-day kernel RCE or the FBI trying to decrypt my phone. Many of us are just normal people seeking the basic dignity of privacy and property ownership (as in freedom over the phone you own). Their product has a very real and significant impact on reducing the amount of risk I have to accept as an individual who has to use a smartphone, and I don't see why you need to belittle their work with obviously non-constructive criticism.

Edit: for all the '*' points you mentioned, how exactly and practically would Purism's lack of those features impact the privacy and security of a Purism phone under real-world threats? Also, in case it wasn't clear already, Google and Apple are considered threats to privacy and security by many who support work like Purism's.


If your phone is not secure against outside, malicious actors, then all the privacy you gain is entirely pointless.

A phone needs to be secure and protect privacy.

Having a security team doesn't mean you get closed hardware, it means you have a security team.

>How big was Linux's security team when it started out?

Not very large, and nowadays there are lots of people working on Linux security, and they discover a lot of CVEs that are fixed in the kernel and then backported. Early versions of the Linux kernel weren't very well guarded against outside attackers (in fact, we're only just seeing the point at which discovered vulnerabilities were introduced lift away from the beginning of the git history, around 2.6).

>Are you saying they won't be able to afford a dedicated security audit team once the product starts profiting in the tens of millions?

I think, and this is pure speculation based on the text of the OP, that the statement was intended to highlight that security teams cost money, and that ProtonMail and DDG aren't able to afford as good a security team as Google, as a simple function of "how much money can we spend on it".

>As an individual, hardware or software that cannot receive an independent review either due to lack of openness or transparency in the development process carries an enormous amount of risk. [...] More transparency == less risk == better security.

Sing along, kids: "It all depends on your threat model!"

But seriously, this amounts to "open source = security", which isn't remotely true and can be easily disproven by adding a backdoor to any open source project and sending you the result. How many people out of 100 would even understand the code, how many of those are able to find the backdoor, and how many of those can patch it out and compile it without the backdoor?

Open Source gives security for people who understand code, not for the people I meet at work that have difficulties opening Facebook if you remove it from their homepage setting.

>What happened to threat modeling? As an individual,I care much more about Google tracking my activity and sharing it(I'm sure you read the recent AP piece revealing google doing just that even when location was turned off) than I am worried about russians using a 0 day kernel rce or the FBI trying to decrypt my phone.

Same trap as before. Lots of people don't worry about Facebook tracking and don't worry about Russians using a 0-day on them. Most people won't buy a phone on the premise that Google won't track them; they buy phones based on the features they offer and the big numbers vendors print on the spec sheet. Of course, lots of people still don't want tracking, but together with the previous group they form the majority of people who might not want tracking but ultimately can't be assed to care enough to spend more than a couple of dollars per year on avoiding it.

>to all the '*' points you mentioned,how exactly and practically would purism's lack of those features impact the privacy and security of purism under real world threats

First point: separation of concerns is good here. A separated baseband means that if the baseband is compromised for any reason (even open source gets compromised), it cannot damage the phone itself. This makes hacking the phone from the outside through the telephone network rather difficult.

Second point: auditing by security teams improves code security. As mentioned above, the Linux kernel and many other high-profile projects receive a shitload of auditing by security professionals combing through the code, because if they didn't, the world would spontaneously combust about 32 seconds later.

Third point: a secure enclave is very useful. Even some open source projects have them, such as the U2F Zero, because they enable the software to operate on a zero-knowledge principle: it cannot leak your private key if it has no access to the private key. Similarly on a phone, your storage encryption key can be a very safe 512-bit key, and your password is compared by on-enclave software that protects the key itself (a toy sketch of the idea is at the end of this comment). This way a state actor or malicious Mossad-level actor can't get your phone's encryption key (though Mossad would just replace the phone with a uranium bar and kill you by cancer, because Mossad doesn't care), because the software will delete the key or simply not grant access if it detects manipulation.

Fourth point: getting third parties to evaluate your design is helpful. Again, as mentioned, a lot of high-profile OSS projects have third parties scanning the code, because two pairs of eyes are better than one.
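
And the promised toy sketch for the third point, in Python (purely illustrative, nothing like any vendor's actual design; the names and limits are made up): the outside world can only ask the enclave to use the key, never read it, and brute-forcing the passcode erases it.

    # Toy enclave: the root key never leaves this object; callers only get a
    # derived key after presenting the right passcode, and too many failures
    # wipe the root key.
    import hashlib, hmac, secrets

    class ToyEnclave:
        MAX_ATTEMPTS = 10

        def __init__(self, passcode):
            self._root_key = secrets.token_bytes(64)   # never exposed
            self._passcode_tag = hashlib.sha256(passcode.encode()).digest()
            self._failures = 0

        def derive_storage_key(self, passcode):
            if self._root_key is None:
                raise RuntimeError("root key erased")
            tag = hashlib.sha256(passcode.encode()).digest()
            if not hmac.compare_digest(tag, self._passcode_tag):
                self._failures += 1
                if self._failures >= self.MAX_ATTEMPTS:
                    self._root_key = None              # wipe on brute force
                raise PermissionError("wrong passcode")
            self._failures = 0
            # hand out a derived key for storage crypto, never the root key
            return hmac.new(self._root_key, b"storage", hashlib.sha256).digest()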


> If your phone is not secure against outside, malicious actors, then all the privacy you gain is entirely pointless.

Not true; even the most insecure phone is secure against some subset of attackers. And if you don't trust the maker of your phone, all other security is useless. Imagine going to war while suspecting your body armor and vehicle have been booby-trapped by your own side...

> Having a security team doesn't mean you get closed hardware, it means you have a security team.

Didn't claim otherwise. Closed hardware is needed to control the market; the alternative is to control user data and open up the hardware and software. Having a security team dedicated to a baseband audit means you're profiting a very large sum and are already successful...

> Not very large and nowadays there is lots of people working on Linux security and they discover a lot of CVE's that are fixed in the kernel and then backported. Early versions of the Linux kernel weren't very well guarded against outside attackers (in fact, we're only just seeing the tailend of when vulnerabilities get introduced to the kernel lift from the beginning of the git history at around 2.6).

The same can be said about Windows security; the point was the lack of dedicated security teams early on. Even making $10M profit a year, you'll find it difficult to hire one security guy and dedicate resources to support his work. My whole, rather obvious, point that you missed here was the correlation between adoption of a product and the ability to dedicate resources like a security team.

> I think, and this is pure speculation based on the text of the OP, that the statement they made is intended to highlight that security teams cost money and Protonmail and DDG aren't able to afford as good security teams as google as a simple function of "how much money can we spend on it".

Somewhat. I understood it as "more money means more security, and transparency is much less valuable", which I disagree with. Transparency and good security hygiene are much more important than throwing money and bodies at it.

> But seriously, this conflates "open source = security" which isn't remotely true and can be easily disproven by adding a backdoor to any open source project and sending you the result. How many people out of 100 would even understand the code, how many of those are able to find the backdoor, how many of those can patch it out and compile it without backdoor?

I made the point that security is a measured evaluation of risk. Transparency and being open source are variables, just like having skilled developers, a good security process, good project management, and resources like money and time. You need some of each, but completely ignoring a variable means everything it multiplies is also 0. Open source helps improve security, but only as one variable.

> Same trap as before. Lots of people don't worry about facebook tracking and don't worry about russians using a 0day on them. Most people won't buy a phone on the premise that google won't track them, they buy phones based on the features it offers and the big numbers vendors print on the spec sheet. Of course lots of people still don't want tracking but together with the previous group they form the majority of people who might not want tracking but ultimately can't be assed to care enough to spend more than a couple dollars per year on avoiding it.

Most people didn't have sex with condoms either, until sex ed came along. After Facebook's recent fiasco, something like 42% of their users either stopped using Facebook or dramatically reduced their use of it. People have no choice but to buy an Apple or Android phone, so you're purely speculating here. Most people don't know exactly how bad things are; try showing someone you consider average their Google activity and location history, then offer them a Purism phone, and prove me wrong. Most people want a fancy, featured phone, but they also believe their hard-earned money should be enough of a price to pay. They would buy the privacy-enabling phone with the same looks and features for a higher price.

As for the rest of what you said, it seems you ignored the part where I said 'real-world threats'. Name one real-world compromise using baseband firmware and I'll donate $100 to a BTC wallet of your choosing.


I'm a bit confused on the comment "Modern phones (and all the flagship phones) have had separation between their basebands and APs for years; a modern smartphone baseband is essentially a USB peripheral."

As far as I can tell [0], the flagship phones (i.e. everything except the iPhone) are pretty heavily invested in the Qualcomm integrated baseband CPU + general purpose CPU [1]. And unless something has changed radically in recent years, the baseband CPU has had direct DMA access to the same memory as the main CPU, and thus any vulnerabilities or backdoors in the baseband CPU have the ability to directly access memory.

With the rising prevalence of devices like the LimeSDR [2] putting the ability to intercept and communicate with the baseband CPU into many more hands, vulnerabilities in the baseband like this one [3] are even more of a risk than before.

I don't think anyone is arguing that Purism is going to have produced the world's most secure software, but the design space they've put themselves in allows them to be audited internally and externally - something that, while you say "Apple and Google have spent a lot of money on it", I can't really verify has led to a quality product. As flaws like the Intel Management Engine fiasco have shown [4], even heavily audited code can have terrible flaws. The thing people don't like about the current approach with cell phones is that if the phone is too old you just have to throw it away, because no one will update it. Purism is offering you something where you can throw away just the vulnerable modem or WiFi card and keep your phone. Even if you don't know of a flaw, you could purchase a different vendor's M.2 4G LTE card and swap it in, making your attack surface different from that of other Librem 5 owners.

There are other things which Purism will doubtless be way worse at catching/auditing, but honestly this is going to be like Linux: the benefit in terms of security is going to be that you are one of maybe 1,000 people using that device in that specific configuration, and you won't be worth exploiting.

[0] https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_sy...

[1] https://www.qualcomm.com/media/documents/files/snapdragon-80...

[2] https://myriadrf.org/projects/limesdr/

[3] https://arstechnica.com/information-technology/2016/07/softw...

[4] https://www.wired.com/story/intel-management-engine-vulnerab...


Even though I generally agree with your comment, I don't know where you got that replacement-part idea from. AFAIK the Librem 5 is not supposed to be modular, and I certainly wouldn't pledge my money for such a project unless it set its fundraising goal many times higher than they actually did. In devices like that, there's not much you can do to achieve hardware modularity without fighting lots of constraints everywhere. The best I expect to see in similar devices is what the Dragonbox Pyra does (and what the Neo900 planned to do) - a sandwich of two PCBs, with one having the CPU and expected to be upgraded, and the other containing stuff that doesn't need to be upgraded as often, like various sensors, the baseband, etc. It wouldn't work for the Librem 5 though, as it seems to be already very space constrained.

The Librem 5's design makes it relatively easy to safely disable and not use the parts you think are vulnerable, but you can't really replace them - that would put this project into a completely different budget category.


Sorry, you're absolutely correct - I had misinterpreted the dev board layout screenshots as the final phone layout screenshots.

That'll teach me to post after I should be asleep!



