Cardelli's Modula-3, a C++ alternative, also illustrates an excellent balance of simplicity, programming in the large, compile time, and run time. Carefully expanding on it with things like macros, a borrow checker, and LLVM integration would have made for a simpler, safer systems language. Give it a C-like syntax with C compatibility, too, for adoption.
Actually with .NET Native, the GC improvements in .NET 4.6 (TryStartNoGCRegion() and other companion methods), and the C# 7.x improvements taken from Midori, it is quite close.
C# looked a lot more complex than Modula-3 when I last looked at it. The book was thick. They definitely did nice things in C#. I just don't know if it's fair to equate it with the simplicity vs get-shit-done vs small-runtime balance of Modula-3.
I'm up for you elaborating a bit on the second sentence, since it sounds interesting. Not doing .NET, I don't know what any of those are, except the middle one sounds like SYSTEM/UNSAFE sections.
It is more complex, but Modula-3 isn't that tiny either; it's around Algol 68/Ada 83 levels of "tiny".
They integrated the improvements from M#/System C# (Midori) into C#.
Namely: ref returns, ref for local variables, stack allocation for arrays in safe code, spans (slices) across all memory types, and allocation-free pipelines.
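For the non-.NET folks asking upthread: a span is a bounds-carrying view over contiguous memory, wherever that memory lives. C++20's std::span is probably the closest analogue for this crowd, so here's a rough sketch of the idea in C++ (the C# version differs in detail, but the shape is the same):

```cpp
#include <array>
#include <span>
#include <vector>

// One function, one view type, any contiguous memory.
int first(std::span<const int> s) { return s.empty() ? -1 : s[0]; }

int main() {
    int stack_buf[3] = {1, 2, 3};                   // stack memory
    std::vector<int> heap_buf = {4, 5};             // heap memory
    static std::array<int, 2> static_buf = {6, 7};  // static storage

    first(stack_buf);   // all three calls go through the same view,
    first(heap_buf);    // with the length carried alongside the pointer
    first(static_buf);
    return 0;
}
```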
That seems like a good comparison. I'll add that you could easily do systems programming in Modula-3; the SPIN OS was written in it. It also had a standard library with formal verification of some of its properties. That it was easy to do systems programming in is the big difference.
That said, Go follows the Wirth tradition most closely among the popular languages. People are learning it fast, coding stuff fast, and the apps are safer. Definitely a success. :)
Off topic, but I just love this article: it's 22.8 KB, with a bit of hand-written JavaScript that the website works well without, and just enough CSS to make reading pleasant.
It is just barely possible to significantly raise the cost of exploiting memory corruption vulnerabilities for projects implemented in unsafe languages.
If there is something positive walled gardens can bring to the table, it would be more ecosystems where entire classes of exploits are completely mitigated. I think that's something users and devs should be asking for. (We did it with USB plugs you don't have to flip over.)
I like this article. I want to correct something on the C and C++ side.
He hints that it's barely possible to write correct C. He doesn't mention how the safety-critical sector, especially the regulated part, cranks out reliable C on a regular basis. They use C subsets with careful coding, reviews, and every automated tool they can get: static analyzers, automated testing, you name it. There are also analyzers that, when they work, can prove the absence of entire classes of errors. RV-Match, Astree Analyzer (used by Airbus), and Frama-C are examples. CompCert eliminates compiler errors afterward. Most security-critical development doesn't use tools like these even when companies can afford them. It's apathy, not technological capability.
Now, C++ is special, given it was designed to reduce the risks of using C. Yet almost all the verification tooling was built for C and Java. C++ is just so complicated that it's hard to statically analyze or even build tools for. That means a C program done as above might have fewer errors than a C++ program. For that reason, I recommend avoiding C++ in favor of either C or safe systems languages compiling to C, to take advantage of C's verification tooling. They're getting hit with these vulnerabilities because they're using a harder-to-analyze language, possibly with a coding strategy that adds difficulty. See Cleanroom Software Engineering or Praxis's Correct-by-Construction for examples of low-defect development strategies.
I completely agree with your assertion regarding the verification tools for C. Unfortunately, most commercial projects (i.e. not regulated) won't pick between e.g. Frama-C and C++; instead they'll choose between plain old C and C++, thereby making C a very poor choice.
Nitpick: The idea of your comment is sound, but Frama-C is a C program static analysis framework, not a C subset. You still write regular old C even with Frama-C.
Yeah, but I didn't say pick between Frama-C and C++. I said pick between a subset of carefully-written C with all the tooling one could throw at it, and C++. The latter comparison is an easier tradeoff to make than the former, especially with the tools I mentioned that have low false-positive rates. They'd be serving Google well right now, among other users.
C has gotten considerably better over the last decade with the wide availability of dynamic analysis tools. It's not particularly difficult to run a project's unit tests under each of the various sanitizers nowadays.
I agree with most of this dude's points, except about not starting new projects with C++.
I think C++ still has good strong merit but it definitely needs some critical thinking about where it's used. Other languages are likely better candidates for most things.
The author works on Google Chrome so I'm pretty sure they're familiar with the pros and cons of C++. It's possible that there's a place for using C++ on a new project in 2019 but that would definitely be a decision to justify with a comprehensive process for mitigating the risks of not using e.g. Rust.
Although on HN it looks like there's some kind of competition happening, Rust is a rounding error compared to languages like Java, C#, or even Swift, which are actually chosen instead of C++ for many projects.
I look at job ads periodically, and Rust is almost entirely absent. I've talked to someone from the Rust community and they told me that "many" companies are using Rust but not advertising it, and not hiring for it on public channels.
Whatever the reason, the Rust skills market seems to be non-existent, which is a bad position to be in when considering whether to start a project in Rust. Project = commercial product that makes money, not OSS or start-up dreams.
The subtext of Rust in posts like this is that Rust compiles programs that are pin-compatible with C/C++ programs. Without rethinking the architecture of a kernel, for instance, you're meant to be able to write Rust drivers.
I prefer other higher-level languages to Rust; I'd rather just work with garbage collection than wrangle the Rust borrow-checker. But if you need to write software that can't abide GC, Rust is basically the only memory-safe game in town.
nickp mentioned some alternatives, basically verified C or model-driven development for cases where it really matters.
Anyway, if we stick to standard commercial projects, I'd say the languages I enumerated are solid C++ (and Rust) competitors, because C++ at least isn't used to develop only super-performant or low-level bit twiddling SW, and I don't see why Rust would be either.
Obj-C was a solid C++ competitor on anything Apple and now Swift maybe even more so. It's a great language that can get you 95% there and I can almost guarantee that the remaining 5% won't be Rust, but rather C or C++.
Same goes more or less for Android with Java or Kotlin. Having a GC didn't stop them from becoming the platform standard, despite the embarrassing growing pains they had.
It's really a game of platforms. Almost no one would pick a language because it's memory-safe, it's just a standard feature for most kinds of software developed nowadays.
For the SW where Rust and C++ would both be a good fit (gaming?, audio, video, etc.), C++ is entrenched, and it's very unlikely that companies will have to justify using it instead of Rust in 2019 or 2020, especially when it's already the default in the first place.
A hard lesson I took from my experience with Turbo Pascal, Oberon, and others is that platform languages always win in the end.
Better syntax and improved semantics are nice, but if the debugging experience, IDE tooling, and integration with platform libraries take a hit, then in the long run the improvements aren't really worthwhile.
However, given that Google (Fuchsia), Microsoft (IoT Core, Azure), Oracle (railcar), Amazon (Firecracker) are testing waters with Rust, I guess it might eventually become a blessed language in some of their systems.
But it will take time for sure, especially regarding tooling parity.
I've been trying to correct my instincts from that era for the success of open source. It's a lot easier now to collaborate, especially without licensing getting in the way, and the tools built around things like LLVM seem like a significant shift in the costs of a non-default language.
Not really, you just need to have a look at Android as an open source platform.
Java is the platform language, followed by Kotlin, which, even with all the love Google has given it lately, still has a few corner cases where it isn't 1:1 with Java on the platform.
Finally there is C++, which comes in 3rd, with lots of caveats given the NDK's evolution and its integration into Studio.
Anything else brings its own set of problems, plus the headache of having the NDK as the entry point into Android and JNI as the FFI for Android APIs.
I'm not saying it's easy now, just easier. Thinking back to the Turbo Pascal/Delphi era, you had to convince people to make a large purchase, and your team had to spend a non-trivial amount of time dealing with compiler code generation, debuggers, and a wide range of less compatible operating systems (when I worked for a compiler vendor, our QA test matrix was ~600 different OS & CPU combinations). That's a lot to fund out of a smaller revenue stream.
Although I like Rust for those scenarios where GC isn't an option, I bet Ada/SPARK still have more commercial deployments out there.
It might change given the price of their compilers versus a free rustc; however, those are industries that also certify compilers, which isn't going to happen for the time being for Rust toolchains.
Also Rust still isn't pin-compatible with COM/UWP or Objective-C runtime, and I guess we need to wait around one year until mixed language debugging experience is a thing across all major IDEs.
> Although I like Rust for those scenarios where GC isn't an option, I bet Ada/SPARK still have more commercial deployments out there.
I doubt it, if you're counting users. Hundreds of millions of users of Firefox and Dropbox, just to name the first two that come to mind, use Rust code.
> Also Rust still isn't pin-compatible with COM/UWP or Objective-C runtime
What? Of course it is. We're shipping Rust code that uses COM and Objective-C in Firefox beta, right now (WebRender).
Again, we ship COM components. You can't load local fonts on all versions of DirectWrite without that.
Likewise, Rust has had support for creating Objective-C classes with methods written in Rust for ages. You can't usefully render OpenGL into a window on macOS without doing that.
Since Windows 3.x days, actually, but only by those masochistic enough to use the C API, or low-level OS devs at the respective companies.
Anyone else who praises productivity uses programming languages that take care of the low-level boilerplate and offer a high-level debugging experience in the respective IDEs.
As far as I know, only the Pony programming language[0] statically ensures the same guarantees as Rust, but it is not at 1.0 and the community is tiny. Pony takes a wildly different approach to achieving those guarantees, though, and it's geared more towards highly concurrent servers than the wide spectrum of programming domains that Rust covers (embedded, systems, etc). It uses actor-based concurrency, is AOT compiled, has a concurrent GC without stop-the-world collection, and ships with a "cache-aware, work-stealing scheduler" runtime. It has a similarly steep learning curve to Rust: its reference capabilities are Pony's equivalent of Rust's ownership and borrow checker.
As someone currently looking for a job with Rust, there are tons of opportunities out there. I am obviously a bit of a special case, but I've had probably forty inquiries so far, from several FAANGs, YC startups, and others. There are enough that I haven't even had time to apply to the places I know are using Rust but didn't reach out.
It is true that there are nowhere near as many as for Java or C++, but that's pretty normal given the languages' relative ages.
I think one of the January Who's Hiring jobs was a company doing Rust, but looking for onsite in Berlin or remote in Europe. Maybe one in London as well.
I mean, you really are a special case - I have only lightly dabbled in Rust, but I immediately recognize your name because of the work you've done in the Rust community.
If you want an easy time finding a job, Java or Javascript will definitely accomplish that.
It is not, however, impossible to find jobs in Rust at the moment. I think January's Who is Hiring post had 5 Rust job listings. Rust is still not used in a wide range of industries, but you can find a number of opportunities in crypto, fintech, security, and even a few big data shops.
And yes, there are even more jobs that aren't always out there on public channels. Good reason to attend your local Rust meetup.
> when considering whether to start a project in Rust. Project = commercial product that makes money, not OSS or start-up dreams
How do you know whether the commercial product is going to make money when you're still considering what programming language to write it in? Until it's built and launched, it's just a "commercial product dream", similar to a "start-up dream".
That’s true in general but what I was referring to are the subset of new projects where someone would pick C++ instead of Java, C#, Go, etc. The latter are definitely more popular in general and I’d expect there to be far more cases where they’d be selected, so rejecting them for technical reasons would tend to bring you into areas like not using a runtime or GC.
Well if you're starting a project now that you expect to live for many years, there's the issue of C++ safety now, and C++ safety in the foreseeable future.
Right now projects have the option of building with the sanitizers (particularly the address sanitizer) enabled, right? The executable would be slower and less memory efficient, but it would eliminate most of the critical CVEs that exploit invalid memory access. Is there a reason Google couldn't provide alternative builds of Chrome with the sanitizers enabled?
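For anyone who hasn't tried them: enabling a sanitizer is just a compile flag, and it turns silent memory corruption into a loud crash with a report. A minimal sketch (assuming clang; GCC takes the same flag):

```cpp
// use_after_free.cpp -- a bug the address sanitizer catches at runtime.
// Build and run: clang++ -fsanitize=address -g use_after_free.cpp && ./a.out
#include <iostream>

int main() {
    int* p = new int[8];
    delete[] p;
    std::cout << p[3] << "\n";  // heap-use-after-free: ASan aborts here with
                                // stack traces for the bad access, the free,
                                // and the original allocation
    return 0;
}
```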
As for the foreseeable future, you can come up with arguments (convincing or otherwise) that C++ will be effectively memory (and data race) safe in the foreseeable future. At some point, presumably the core guidelines lifetime checker will enforce memory safety. To what degree it will in practice is a matter of speculation, but in theory it could be as effective as the Rust compiler.
Just as an observation: if you peruse the Chromium source code from a modern C++ perspective, there seems to me to be an alarming prevalence of raw pointers. (Probably understandable, as Chromium's been around for a while.) C++ is now a very powerful language, and if (memory) safety is a priority, it is already practical (and performant[1]) to avoid notoriously unsafe elements like raw pointers and unchecked buffers in favor of safer alternatives. (It is similarly practical to avoid data-race-prone elements[2].)
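To make that concrete, here's a minimal sketch of the style being described, using C++20's std::span and a made-up Widget type purely for illustration:

```cpp
#include <array>
#include <iostream>
#include <memory>
#include <span>  // C++20

struct Widget { int id = 0; };

// A span carries its length with it, so iteration can't run off the end
// the way raw pointer arithmetic over a bare buffer can.
int sum(std::span<const int> xs) {
    int total = 0;
    for (int x : xs) total += x;
    return total;
}

int main() {
    // Ownership is explicit: no raw new/delete, no leak, no double free.
    auto w = std::make_unique<Widget>();

    std::array<int, 4> xs{1, 2, 3, 4};
    std::cout << w->id << " " << sum(xs) << "\n";  // array converts to span
    return 0;
}
```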
Interestingly, this is in stark contrast to the recent article about learning C++ in 2018 and being perfectly happy with it. I think part of this is because C++ is such a vast language with a huge and diverse ecosystem that it's kind of like saying you should stay out of Africa because it's so dangerous, when it's such a huge place and many parts of it are relatively safe. Maybe the two authors experienced different "parts" of C++ and are referring to those subsections only.
You can learn and use C++ in 2018 and be perfectly happy with it.
This article is arguing that for safety/security, you shouldn't use it. For the majority of projects that are more complex than Hello World, you'll be venturing out of "relatively safe".
The problem with C++ is that unless you have the source code for everything available, and the team agrees on making proper use of static analysers, there isn't any guarantee that a rogue team member or a binary dependency isn't making use of unsafe bad practices.
Whereas in some systems languages with modules, if you make use of unsafe code blocks or a pseudo-module, the module gets tainted and you can even block its use.
Using -Weverything for anything other than a one-off, manual build is a surefire way to end up with one or several of: 1) hacky code that works around some compiler author's style preferences, 2) less safe code because of the workaround hacks, or 3) coworkers quickly learning to ignore or disable (e.g. via inline #pragmas) diagnostics.
For both GCC and clang, -Wall comes close to the line of rejecting objectively correct and good code, but doesn't quite cross the line. -Wextra crosses that line (expect it to reject some correct and good code) but the pros often outweigh the cons. -Weverything blows past the line and wraps around the earth several times.
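A concrete example of where those lines fall (assuming clang; the exact warning sets vary by version):

```cpp
// pad.cpp -- objectively correct code that -Weverything still flags.
//   clang++ -Wall -Wextra -c pad.cpp    -> clean
//   clang++ -Weverything -c pad.cpp     -> -Wpadded fires, because the
//                                          compiler inserts 3 padding bytes
struct Point {
    char tag;  // 1 byte, then padding to align the int below
    int  x;    // 4-byte alignment on most ABIs
};

Point origin = {'o', 0};
```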
It's hilarious: after the event-stream fiasco last year, NPM has changed... nothing. NodeJS made a 'package maintenance' repo, but failed to address the actual root cause of the problem.
We'll see plenty of other NPM disasters this year, and as always, nothing will change because security is an afterthought not a core mission principle.
If it was, NPM wouldn't exist in its current form. It's confusing: adding extra security wouldn't even be that difficult or time-consuming, they just don't do it.
I wish it were possible to take over a project with individuals who actually care. The people in charge of NPM are incompetent at running a secure, large-scale distribution system; why they're still allowed to be a part of it is beyond me...
If security isn't your top goal you don't belong in charge of a very important, widely used ecosystem which many rely on.
How do you see secure package management? What extra security would you add?
As I see it, people want to use code that third parties control, but without trusting them somehow. Package management alone can only go so far. Code just has too many privileges in a language like JavaScript, and no matter what you do, something coming from a third party can pwn you or at least facilitate that.
There needs to be a language where code doesn't have so many privileges. Imagine if the language only allowed importing functions that either have no side effects, or whose every side effect requires an explicit security token, say for reading some file or connecting to some remote host. Instead of passing strings around and letting any function have unlimited privileges to read any file, the code would pass security tokens around that let functions read only the specified files. Think language-level capability-based security. I can't think of anything else that can help with random third-party code.
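A rough sketch of what that API shape could look like, with made-up names (FileReadCap, Authority). Note that C++ can't actually enforce this, since any code can still call fopen directly; that's exactly why it would need language-level support, as described above:

```cpp
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>

// A capability: an unforgeable token granting read access to one file.
// The constructor is private, so only Authority can mint one.
class FileReadCap {
    std::string path_;
    explicit FileReadCap(std::string path) : path_(std::move(path)) {}
    friend class Authority;
public:
    const std::string& path() const { return path_; }
};

// Trusted top-level code decides which files may be read.
class Authority {
public:
    static FileReadCap grantRead(std::string path) {
        return FileReadCap(std::move(path));
    }
};

// Third-party code can only read files it was explicitly handed a
// capability for; its signature advertises the privilege it needs.
std::string readAll(const FileReadCap& cap) {
    std::ifstream in(cap.path());
    if (!in) throw std::runtime_error("cannot open " + cap.path());
    return std::string(std::istreambuf_iterator<char>(in), {});
}

int main() {
    auto cap = Authority::grantRead("/etc/hostname");
    std::string contents = readAll(cap);  // ok: privilege granted explicitly
    // readAll("/etc/shadow");  // won't compile: a string is not a capability
    return contents.empty() ? 1 : 0;
}
```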
Securing a dependency management system like npm, pip, et al is not a solved problem by any stretch. This isn't about the competency of any particular individuals, but the willingness of the software engineering community as a whole to always trade security for speed (of development, of shipping, etc).
Since you said "even discussing", might as well mention that the Rust community has discussed adding 2FA support to cargo: https://internals.rust-lang.org/t/requiring-2fa-to-publish-t.... But as far as I know, crates.io/cargo still has no 2FA feature of its own.
On the other hand, if your crates.io account is linked to a GitHub account, you could have 2FA via GitHub. So I'm not entirely sure whether "cargo supports 2FA" would count as true or false at the moment.
Obviously this is different than implementing it from scratch, but crates.io (the Rust package repository) does authentication through Github OAuth, which in practice gives you 2FA.
Well-tuned and calibrated surveys, coded free-form responses (which take care to implement), net promoter scores; heck, at scale it might make sense to hire polling agencies to do this.
But what if I want to know why you're not using my product every moment of every day, and what I'd have to change to do that?
If you're satisfied with my product, but you only buy it once a year, that's bad for me. If you're not very satisfied with my product, but you buy it 30 times a year, that's better.
I'd rather have 30 sales than 1 sale, even if you badmouth my product and lose me 15 more sales down the line.
TBH I may not give a crap how "happy" you are if you're still engaging.
> But what if I want to know why you're not using my product every moment of every day
Did you try asking me?
> and what I'd have to change to do that?
Again, did you try asking me?
> If you're satisfied with my product, but you only buy it once a year, that's bad for me. If you're not very satisfied with my product, but you buy it 30 times a year, that's better.
If I'm satisfied with your product what makes you think I'd be more satisfied with it if I had to buy it multiple times per year?
> I'd rather have 30 sales than 1 sale, even if you badmouth my product and lose me 15 more sales down the line.
You assume every sale is the same price. If you sell your product to 30 people, are those 30 people generating the same amount of revenue as the 15 people could have if they were more satisfied with your product?
> TBH I may not give a crap how "happy" you are if you're still engaging.
That is, in a nutshell, exactly why I'm not satisfied with many online products and have actively disengaged from them. Your line of thinking leads down a road where the end user is nothing except a source of income to you. You've completely forgotten that you're supposed to provide a service for humans. It shows a complete lack of respect of users' intent and instead you wish to push users to generate more money for you potentially at the cost of the users themselves.
My line of thinking is what makes companies profitable.
You can moralize all you want, but the incentives align for a company to maximize on profitability, not on "hey my customers like me". You can't pay for dinner with goodwill, but you can pay for dinner with money.
If I'm being so disrespectful, why is my engagement high? Do people love to be disrespected?
You're talking about what ought to be, as if it were how it is. It's not that way. It's how I describe. Maybe it should be some other way, but it's not.
> My line of thinking is what makes companies profitable.
I don't disagree, however unfortunate that is.
> You can moralize all you want, but the incentives align for a company to maximize on profitability, not on "hey my customers like me".
Let's change that so that the incentives do work for being liked.
> You can't pay for dinner with goodwill, but you can pay for dinner with money.
Somewhere there's some middle ground which represents the failure of our government to incentivize; where companies may still be profitable (even obscenely so), and employees are able to pay for dinner, and customers are happy, and people aren't being manipulated.
> If I'm being so disrespectful, why is my engagement high? Do people love to be disrespected?
Love to be? No. Most have become desensitized to disrespect such that they expect to be abused by collusion between big corporations and big government.
> You're talking about what ought to be, as if it were how it is. It's not that way. It's how I describe. Maybe it should be some other way, but it's not.
I'm talking about what ought to be, indeed. I recognize it's not that way. Do you recognize that it doesn't have to be that way? Don't you agree that it should be some other way?
> If I'm being so disrespectful, why is my engagement high? Do people love to be disrespected?
There is a large difference between what people "want" (at a given moment) and "like." It's very much possible — and I'm pretty sure you're currently advocating it — to make people "want" things they don't actually like. Yeah, it's a weird glitch in our brains.
Also, you are making a very good case for destroying capitalism, so that's good.
And that's all that's wrong with engagement in a nutshell. You prioritize your needs over the user's needs. It's not about trailing or leading indicators, it's about not burning down the rest of the world just to benefit yourself.
"I may not give a crap how 'happy' you are if you're still engaging" - you've really put it succinctly. That's the mentality of a drug pusher.
You're missing one critical point -- lack of engagement means you can't iterate with the customer. No engagement means you can't improve, and you can't make your customer happier.
Nothing happens without engagement. You can survive an unhappy customer as long as they're still around, even if they're only providing feedback and aren't likely to buy again.
You can't survive nobody coming into your shop in the first place.
> You're missing one critical point -- lack of engagement means you can't iterate with the customer. No engagement means you can't improve, and you can't make your customer happier.
You can iterate with the customer without being abusive or invasive about it. Show exactly what's being tracked and provide mechanisms for the person to decline its use. Recognize that the customer is valuable; compensate them for their time if you're tracking what they do or answering questions about their experiences.
> You can't survive nobody coming into your shop in the first place.
Even untracked customers can continue on to purchase goods and services. If you can't build a business site which is capable of basic business operation then perhaps you should hire competent web developers and/or a good marketing team; or perhaps your business wasn't so needed after all.
If someone badmouths the product, they aren't satisfied. And even if they're engaged with it, odds are high they're looking for something better.
Besides, the only reason engagement became a thing is because the way people are paid is based on ad revenue instead of actual payment by the person using the product. Satisfaction is king when the user and the customer are the same person. Just ask any insurance company (for example).
Whatever happened to user studies? If you want to find out why your product sucks, pay your target audience to use it! I bought a laptop from Gigabyte a little while back that they wanted feedback on, so they offered a $30 woolies card for filling out a form.
That's a little bit of a Catch-22. "Engagement" is not a well-defined metric, and in practice when people try to measure it, they end up justifying the use of any number they happen to be able to measure at the moment. So almost any number can plausibly be called an "engagement score" or something similarly vague.
Your goal should be for something more specific, and ideally at least a tad more concrete. Things like "product satisfaction," "positive impression of brand," and "purchase intent" could all be synonyms for "engagement" but they are not synonyms with each other because they are all more specific and narrowly-defined. And furthermore, they're better guidelines because particular features or activities could be well-suited for one and not another, so optimizing towards one of those actually provides some focus.
Interesting write-up. Having seen iOS in 'The Good' section with no mention of Linux or Windows, I thought things might get interesting in "The Bad" or "The Ugly". But there was no mention of them at all. That was a bit jarring, though I respect the author for omitting them if it's just unfamiliarity with those areas. I'm not sure if there is a connection or intent, but the result may be a (tacit?) endorsement of Apple.
I'm an enthusiastic Linux user (I called myself an evangelist in my first years of using it) and was always on a mission to defend it against the dark evil empire that was Microsoft. Apple was only used by graphic design types back then.
Now that I've established how much I love Linux and the FOSS movement (I'll still defend it over proprietary works), I'm also a realist. Realist because when you do a lot of security dev, reality on Linux looks different today. Sticking, for the sake of discussion, to monolithic kernels (comparing oranges with oranges): I absolutely hate the security design choices of modern Linux. Linux could do a lot better in this regard, and Torvalds doesn't do the industry a favor when he calls security experts a bunch of masturbating monkeys. I can see where he comes from, but the attack surface on any type of Linux (embedded or not) is, I'd go as far as to say, total trash when compared to its BSD cousins. A lot of this isn't just Linux's fault, considering how big the system is today. To realistically secure a Linux system is almost impossible (within budget and time, that is) when compared to BSD.
The only reason why I still put up with Linus is because of FOSS. But if it were purely for security (on a desktop system) I'd have to say Linux is really poor.
It hasn't always been that way. My theory (really just a theory) is that Microsoft got hammered for decades for having shit security (remember trustworthycomputing.com??), and the message slowly but gradually sank in at Redmond. Microsoft (and I still get hiccups when saying it) somehow got their act together. I believe all that pressure amounted to something in the end, while the Linux community missed the train. Not to say that there aren't brilliant ideas coming out of that community, but there is very little pressure to provide integrated security that actually protects the user in a coherent way. You get lots of little pockets of good and bad (as is the nature of FOSS), and integrating them isn't even done well by the major distributions.
Again, not saying Linux can't be secure, but making it good depends a lot on who integrates and sets up the system. And for the end user who only has a semi-technical background (never heard of threat modeling or grsec...), it's a dumpster fire.
I'm not comparing Linux to Apple here, because Apple security is brilliant and thought all the way through to the UI/UX level. Linux in that regard doesn't even stand a chance. @thegrugq has a pretty good guide for OpSec/ComSec which is still relevant to this very day and spells out the tough reality (which a Linux fan like myself doesn't always appreciate hearing): https://news.ycombinator.com/item?id=8950875
Thank you for the comment, I wish the original article included much of the same.
I've been a 100% Linux user for the last 8 years and am a fellow enthusiast. That said, I'm a web-app programmer and worry greatly about security. I have no idea about the internals of Linux or any other OS and really have no time to learn them. I NEED to be able to rely on people who know this part of CS.
So, with that said, and with honest and open intrigue: if it's "almost impossible to realistically secure a Linux system", then how does the internet not crash and burn? It is my understanding that most servers run on Linux, and their exposure to the internet makes them inherently extremely available for targeting. Are they all secretly compromised? Waiting for some global signal? Certainly, there are thousands+ of Linux-based VPSes out there set up by folks with only the most basic skills that have had sites up and running for years (self included). Are those just automatically pwned, and we can expect large swathes of the internet to stop working at any moment because the "dumpster fire" of Linux was used at the heart?
Should I switch my server to an Apple-based one so that I can benefit from the best security?
"Dependency slurping systems like NPM, CPAN, go get, and so on continue to freak me out. They might potentially be more dangerous than manual dependency management, despite the huge risks of that practice, precisely because they make it ‘easy’ to grow your project’s dependency graph — and hence the number of individuals and organizations that you implicitly trust. (And their trustworthiness can suddenly change for the worse.)"
I always pin dependencies to fixed versions in production software, which makes it less likely that my toolchain gets poisoned by a rogue version update. Also, for languages that allow it, I bundle dependencies locally (in Python, e.g., using wheels) or run my own registry proxy, which also makes poisoning by registry takeover less likely. Personally, I think there are bigger threats than malicious dependencies, especially if an adversary specifically targets me. My biggest concern is what happens when someone takes over our CI/CD setup, as that gives way more access to an adversary.
Implicit trust is easy to solve by forbidding it. If someone doesn't want to maintain their project anymore, someone else could make a fork, try to promote it, and try to gain trust again, but not take over the original project. While this particular problem can be solved, in reality it doesn't matter, because trusting third parties, with all the poor security they have, is a much bigger security hole.
How easy is it to get a BSD working on a laptop these days?
Once you get past UEFI, installing Linux on a laptop isn't particularly difficult. The challenge is that drivers for wifi, audio, webcam, etc, will be very hit or miss depending on the hardware. My impression is that finding reliable drivers for BSD is even more difficult.
Are there laptops that are well supported by BSD (not including macOS)? Thinkpads maybe?
Never had any problems on my laptops with any of the hardware (3 installations in the past 5 years). Even got it working on a "Tuxedo" Linux system (would never buy from this company again). Here are some recent articles that seem to indicate I wasn't just lucky (mileage may vary, but no doubt OpenBSD has come a long way):
I'd maintain BSD is more arcane, but there are fewer ways to shoot yourself in the foot with OpenBSD than with Linux. The design is coherent and still minimalist, sticking to the UNIX design philosophy. BSD doesn't have systemd, which suffers horribly from scope creep (an HTTP server for journal events (WTF), a binary log format, defaulting to Google resolvers[1]...). All daemons of the base system are chrooted by default. The base system also has really good hardening, which is unmatched on Linux (Linux has AppArmor or poorly managed SELinux out of the box). Also auditing: BSD code gets audited by people with a security mindset whom I've happened to follow and respect for a very long time. As a C developer myself, I am confident in saying they know how to think in these terms, while the Linux maintainers are known for hurling abuse at anyone who raises an issue (and if you're lucky enough to be addressed at all, topics that require design considerations and not just fixes will more likely get you ignored). It's miles apart in that sense.
All in all you need to be much more skilled to harden Linux (which gives plenty of opportunity to trip up). I could go on and on.
These are all security relevant issues imo.
On a more personal note, it appeals to me because the community is incredibly knowledgeable. Like it was with Linux 10-15 years ago, before mainstream users clogged up the forums with "how do I change my desktop". I know this sounds elitist, but being strapped for time, it really is refreshing not having to wade through hundreds of copy/paste blogs that are all outdated and obviously written by hamsters.
You can go back and forth about the marginal security differences of the two platforms and how typical and optimized security configurations alter their total security.
But in reality, they're both very similar operating systems with fundamentally similar security models. It's a little like arguing about whether Python is a more secure programming language than Ruby.
I think it was "People updating their iOS" more than iOS unqualifiedly. If people are Patching Their Shit that seems like a security win regardless of platform.
ARM is a bit more secure than x86 in that there's one unambiguous way to read a sequence of ARM instructions, but with x86 you might have two entirely different yet completely valid instruction sequences depending on whether you start reading at address FOO or at FOO+1.
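A tiny illustration of that overlap, using real x86 encodings (0xB8 starts a 5-byte mov; 0x90 is a 1-byte nop); the buffer is just for show:

```cpp
// The same six bytes decode to different, equally valid x86 instructions
// depending on where the CPU starts reading:
unsigned char code[] = {0xB8, 0x90, 0x90, 0x90, 0x90, 0x90};
// From code+0:  B8 90 90 90 90   mov eax, 0x90909090
//               90               nop
// From code+1:  90 90 90 90 90   nop; nop; nop; nop; nop
//
// Fixed-width ISAs like 32-bit ARM don't have this property: aligned
// 4-byte words decode exactly one way, so "hidden" instruction streams
// (handy for return-oriented programming) are much harder to find.
```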
> Actually, file descriptions. The description is the kernel state pertaining to an open file. The descriptor is a small integer referring to a file description. When we send an FD into a UNIX socket, the descriptor number received on the other end might be different, but it will refer to the same description.↩
Oh man, that perfectly explains a formerly-unanswered question I had about how that mechanism worked. Thanks for linking.
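For anyone else who was confused by the same thing, a minimal sketch of the distinction using plain POSIX calls:

```cpp
// fd_vs_description.cpp -- two descriptors, one description: shared offset.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd1 = open("/etc/hostname", O_RDONLY);
    int fd2 = dup(fd1);  // new descriptor number, SAME file description

    char buf[4];
    ssize_t n = read(fd1, buf, sizeof buf);  // advances the shared offset
    (void)n;

    // fd2 sees the offset move, because the offset lives in the
    // description, not the descriptor:
    std::printf("fd2 offset: %lld\n", (long long)lseek(fd2, 0, SEEK_CUR));

    // A second open() of the same path would create a brand new
    // description with its own offset; that's the difference.
    close(fd1);
    close(fd2);
    return 0;
}
```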
ELI5 : It is 2019. Why isn't everybody using pubkey crypto for email?
i.e., transparent support of:
* I publish a public key
* Anyone can use it to send me an encrypted email
* I use my private key to read
* as a bonus, I sign my email with my private key
* anyone can verify with my public key
1. Key distribution is a difficult problem. Where do you go to look up what the public keys are? The only obvious answer is the email provider, which brings us to...
2. Your email is already in practice secure from everyone except you, anyone with access to your email address, and your email provider. Your IMAP, SMTP, and POP connections are all secured with TLS anyways, and the MX connections between your email provider and your recipients' email providers are likely to be secured with TLS as well if at least one of them is Big Email Provider. (Even if that's not the case, this communication is occurring over the Internet backbone which is generally only tappable by the sorts of people who could probably get into your personal email account via other means anyways--the extra value is probably illusory).
3. Secure email doesn't actually secure any headers. Given that From/To/Date is going to be implicated by SMTP logs anyway, and these are the most useful headers for intelligence analysis, there's not much that could have been improved.
4. You lose the ability to do full-text search (or even header search, if you designed it to hide the headers as well).
5. Anti-spam measures become basically impossible with encrypted email; you'd have to push everything client-side, which is going to get annoying when you're running spam detection on your smartphone for 10x the messages and you have to spend extra time decrypting each one.
6. Deploying any automated public-key infrastructure requires somehow automating the verification process. The verification process that can be automated is "this came from a person who has access to this email account," but the verification process that people want is "this came from the person whose name is associated with this email account," which we don't know how to automate.
> Deploying any automated public-key infrastructure requires somehow automating the verification process. The verification process that can be automated is "this came from a person who has access to this email account,"
What if all gmail and Microsoft email accounts came with a 4096 bit public key, with an option for the state DMV to verify the public key's association with a driver's license? I'm sure this would come with downsides, but would they be much worse than state DMVs using your gmail account to reset their web app passwords?
1. It removes the opportunity for free/cheap email providers to commercialize their users' data.
2. It makes corporate oversight of employee emails difficult.
3. Privacy is a differentiator only for a tiny fraction of the marketplace. So, there is very little profit incentive to invest the engineering resources to implement easy pub key into email.
Because it is a giant pain in the rear. Email is never going to be an end-to-end secure protocol. Period. It's never going to be transparent. S/MIME could have gotten the world there, but email has too much inertia. Ultimately you have to figure out how to trust keys, and that turns out to be the hard part. Use Signal.
I get that "trust" is the hard part, but for a circle of people that I directly know it shouldn't be hard.
I wasn't expecting the email protocol to change, just that I (my email client) would send / receive ciphertext (e.g., base64 encoded). The ciphertext would be transmitted just like any other message.
You're right! Sending and receiving safely encoded and encrypted ciphertext is so easy and straightforward that it should be common by now.
Perhaps it could be worth considering that there could be complexities from another angle? Key discovery and management are complex tasks that require a working knowledge of PKI. These are also tasks that can be very dangerous to undertake in ignorance, as even a small mistake can lead to a full compromise. Is it possible that many users find these difficulties highly challenging?
S/MIME simply included the public key with every message. Solving initial discovery could be as simple as using a well-known address, e.g.: www.emailprovider.com/.well_known/kalium.pub. As for key management, Apple does a good job of managing this (e.g. my understanding is that Apple Messages are E2E encrypted using keys, yet I've never managed one of those keys); other software vendors would need to step up.
So, it's not a user problem, but an incentives problem. Companies aren't incentivized to solve this problem. OSS could, but the network effects mean that any solution is dead unless you can get the likes of gmail on board.
You're right! Those are all excellent solutions to basic discovery and management problems.
With that said, is it perhaps worth considering that this could be a scenario where fully automating something is in fact not sufficient? As you so wisely and correctly point to, Apple has made key management work. Yet most Apple users probably wouldn't notice, or know what it meant, if they were told that whoever they were texting had a new key. From experience this is already true of messaging systems like Signal and Wire. Making available documentation that clearly and simply explains the matter and what the user could or should do has, historically, not reliably been a great way of resolving this issue.
It might be worth considering that there could be more at hand than an onerous task that just needs a smidge of standardization and a pinch of automation, though to be clear, you are completely right that both are needed and would be very beneficial. Real security generally needs to involve users understanding and weighing risks. I've yet to see any method that automates that well, though I would love to be shown how I am badly mistaken.
91% of incoming email to Gmail uses opportunistic TLS. STARTTLS has proposals to harden its security. MTA-STS uses DNS to advertise that a domain will always use TLS for email. DANE/DNSSEC is the ultimate solution, though that's very ambitious.
In practice, most emails between major domains will be encrypted over every public link. Only the two providers and the two parties will know either the contents or the metadata.
Whether you can trust the providers is a different discussion. But if you can't trust the companies, then you also couldn't trust most software either. The encrypting and decrypting have to be done somewhere.
The existing tooling is hard to use, and not enough people care, nor are they made to care (e.g. tons of companies handle data over e-mail but are never really pushed by laws, or by enforcement of laws, to use encryption).
I think the reason is simpler than that: big e-mail providers want to read your emails to advertise to you, so they sabotage and oppose end-to-end encryption.
http://lucacardelli.name/Papers/TypefulProg.pdf is now next on my list when I finish reading A Philosophy of Software Design (which is brilliant if you haven't seen it).