This article touched on a point that I feel is very relevant: unexpected show cancellations, apparently now happening at Apple TV+ as well.
Netflix and Disney+ trained me to not even watch a show until it's concluded, because it could get cancelled and I don't want to invest my small amount of free time in entertainment that might not even finish. It produces a self-fulfilling prophecy: people with the same mindset as me do the same, and then the ratings for something I (and probably they) are interested in aren't high enough and it gets cancelled.
What should worry them, though, is that it also led to the final step for them: around a year ago I cancelled my Netflix and Disney+ subscriptions with no intention of renewing them. The end result is that "TV series"-style shows are effectively dead to me; I've shifted my time mostly to novels (which are thankfully behind the curve on this trend, hopefully forever), followed by single-player video games, and finally movies. (Why didn't movies take the first slot? Because I'm usually only willing/able to give 30-60 minutes of continuous time to entertainment, and it's very unsatisfying to pause a movie to resume later.)
The continuous, immediate feedback on series performance coupled with a reputation of acting on that feedback immediately is killing the traditional television medium.
On top of all of that, Apple TV+ has the added albatross of requiring Apple hardware to watch its shows, as if they were a siren song to pull people more tightly into the ecosystem, and that dooms the shows to failure, at least amongst people who don't want to pay for overpriced hardware running software whose quality degrades over the years (I switched to Linux in 2016 because it was more reliable than my MacBook Air; being better than Windows isn't good enough anymore, especially when Linux has a greater catalog of software these days).
The needs of Apple, Inc. weigh on its Apple TV+ division; they don't help it. And the sins of the streaming services against actually finishing a story further increase the trust deficit with Apple TV+. No amount of marketing is going to turn that around.
`drop` is an optimization. You never have to call it if you don't want to; Rust will automatically free the memory for you when the variable goes out of scope.
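A minimal sketch of what I mean (toy allocations, nothing more):

```rust
fn main() {
    let big = vec![0u8; 1024 * 1024]; // a heap allocation

    {
        let small = String::from("scoped");
        println!("{small}");
    } // `small` goes out of scope here and is freed automatically

    drop(big); // optional: release `big` earlier than scope end
    // without this call, `big` would be freed at the end of `main` anyway
}
```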
Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`, which, as you note, is also possible in other languages). When you're writing normal Rust code, the compiler will prevent you from compiling code that uses memory incorrectly.
You can then solve the problem by figuring out how you're using the memory incorrectly, or you could just skip out on it by calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` for multi-threaded code, and have it effectively garbage-collected for you.
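Roughly, a toy sketch of that escape hatch (a contrived shared counter, not code from any real project):

```rust
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Single-threaded: `Rc` reference-counts the value, so handle clones
    // are cheap and the allocation is freed when the last handle drops.
    let shared = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&shared);
    println!("{} references", Rc::strong_count(&alias));

    // Multi-threaded: `Arc<Mutex<T>>` gives the same "don't think about
    // lifetimes" experience across threads, at some runtime cost.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap());
}
```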
In any case, this is orthogonal to safety. Rust gives you better safety than Python and Java, but at the cost of a more complex language in order to also give you the option of high performance. If you just want safety and easy memory management, you could use one of the ML variants for that.
You don't really seem to understand the point I'm making, or perhaps you don't understand what memory safety means. Or perhaps you're assuming I'm a Rust newcomer.
> Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`
There is no "except if you" in this context. I'm talking about unsafe Rust, specifically. I'm not talking about safe Rust at all. Safe Rust is a very safe language, and equivalent in memory safety to safe Java and safe Python. So if that's your argument, you've missed the point entirely.
> In any case, this is orthogonal to safety.
No, it's not orthogonal - memory safety is exactly what I'm talking about. If you're talking about some other kind of safety, like null safety or something, you've again missed the point entirely.
> ... calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` ...
This whole paragraph is assuming the use of safe abstractions. If you're arguing that safe abstractions are safe, then, well... I agree with you. But I'm talking about raw pointers, so you're missing the point here.
You're moving the goalposts. Your original post had zero mention of unsafe Rust. You have now latched onto this as somehow proving Rust is less safe than Python and Java, despite also mentioning that Java has unsafe APIs you can use, which nullifies even your moved goalposts.
Btw, Python also has unsafe APIs[1, 2, 3, 4] so this doesn't even differentiate these two languages from each other. Some of them are directly related to memory safety, and you don't even get an `unsafe` block to warn you to tread lightly while you're using them. Perhaps we should elevate Rust above Java and Python because of that?
No goalposts have been moved here. Rust is a programming language with both safe features and unsafe features. It is a totality.
And now you're linking me docs talking about things I already explicitly mentioned in my past comments.
You are so confidently ignoring my arguments, and so fundamentally misunderstanding basic concepts, that this discussion has really just become exhausting. I hope you have a nice day but I won't be replying further.
Yes, Rust is a language with safe and unsafe features. So are Java and Python (and you admitted that in your comments). So Rust is not any less safe than Java or Python by that logic, and the original point you made in the first comment is incorrect.
Actually, Rust is safer, because its unsafe features must be wrapped in an `unsafe` block, which is easy to search for; you can't say that about Java and Python.
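For example, the entire unsafe surface of a file is one text search away (contrived snippet):

```rust
fn main() {
    let x = 42u32;
    let p = &x as *const u32; // creating a raw pointer is safe...

    // ...but dereferencing it is not, and the compiler forces you to
    // mark the spot, so auditing a codebase is a `grep -n unsafe` away:
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```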
I can't think of anything in either Java or Python that is memory-unsafe when it comes to the languages themselves.
You can do unsafe stuff using stdlib in either language, sure. But by this standard, literally any language with FFI is "not any less safe" than C. Which is very technically correct, but it's not a particularly useful definition.
The standard library is an inherent part of the language.
For the end user, there is no difference whether the unsafe escape hatch is a language builtin or a standard library call. The end result is that all of these languages have large safe subsets, and you can opt into unsafety to do advanced stuff. And there isn't anything in the safe subset of Java / Python that you would need `unsafe` for when translating it to Rust.
Again, by this standard, literally any language with FFI is "unsafe". This is not a useful definition in practice.
As far as translation of Java or Python to safe Rust, sure, if you avoid borrow checking through the usual tricks (using indices instead of pointers etc), you can certainly do so in safe Rust. In the same vein, you can translate any portable C code, no matter how unsafe, to Java or Python by mapping memory to a single large array and pointers to indices into that array (see also: wasm). But I don't think many people would accept this as a reasonable argument that Java and C are the same when it comes to memory safety.
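To make that concrete, here's a toy sketch of the pointers-as-indices trick (written in Rust for consistency with the rest of the thread; the same shape works in Java or Python):

```rust
// A toy "linear memory": pointers become indices into one big array,
// so every access is bounds-checked by the host language.
struct Memory {
    bytes: Vec<u8>,
}

impl Memory {
    fn new(size: usize) -> Self {
        Memory { bytes: vec![0; size] }
    }
    fn store(&mut self, ptr: usize, value: u8) {
        self.bytes[ptr] = value; // an out-of-bounds "pointer" panics, no UB
    }
    fn load(&self, ptr: usize) -> u8 {
        self.bytes[ptr]
    }
}

fn main() {
    let mut mem = Memory::new(64 * 1024);
    mem.store(0x100, 7);
    assert_eq!(mem.load(0x100), 7);
}
```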
So you can see that the ability to invoke unsafe code is not a good distinguishing factor. It is the other, safe part that matters. Rust, Java and Python all have huge memory-safe subsets that are practical for general-purpose programming; almost all of their features are available in those safe subsets. C and C++ do not: to make them memory safe, you'd have to disallow most of the useful features, e.g. everything related to pointers/references and dynamic memory.
Agreed. In my personal experience, Rust is safer than Python: a type error in your interpreted Python code surfaces as a runtime error, but it's a compile error in Rust, so you don't have an "oopsie" in production.
Much harder to write Rust than Python, but definitely safer.
(Rust vs Java is much closer, but Java's types being nullable by default, and `throw`n errors not needing to be part of a function's signature, lead to runtime errors that Rust doesn't have, as well.)
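A sketch of the difference I mean, with hypothetical functions:

```rust
// In Rust, "might be absent" and "might fail" are part of the signature,
// so the caller can't forget to handle them (hypothetical functions):
fn find_user(id: u64) -> Option<String> {
    if id == 1 { Some("alice".to_string()) } else { None }
}

fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse() // failure is a value, not an unchecked exception
}

fn main() {
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"), // no NullPointerException possible
    }
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => println!("bad port: {e}"),
    }
}
```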
I'm talking specifically about memory safety (when using unsafe/raw pointers). Being able to say "once I allocate this memory, the garbage collector will take care of keeping it alive up until it's no longer referenced anywhere" makes avoiding most memory safety errors relatively effortless, compared to ensuring correctness of lifetimes.
You can absolutely opt out of lifetime management in Rust. It's not usually talked about because you sacrifice performance to do it, and many in the Rust community want to explicitly push Rust into the niches that C and C++ currently occupy, so to be competitive there the developer does have to worry about lifetimes.
But that has absolutely nothing to do with Rust's safety, and the fact that Rust refuses to compile if you don't provide a proper solution means it's at least as safe as Python and Java on the memory front (really, it is more, as I have already stated). Being more annoying to write doesn't affect its safety; those are orthogonal dimensions to measure a language by.
Most memory safety errors come from not being able to test things like whether you are really dropping references in all cases, or whether your C++ additions are interacting with each other. C is not safe, but it is safer than C++. Rust is not going to stop all runaway memory possibilities, but it isn't going to hide them like a JS GC does.
If your goal is to ship most users something that kind of works, then there are certainly complex solutions that will do that. If your goal is memory safety, that's more like every device working as expected, which is achieved with less bloat, not more.
I personally only use AMD (excepting one test machine), but Intel does have the best single-thread performance[1] so if you have some crufty code that you can't parallelize in any way, it'll work best with Intel.
The new Zen 5 has much better single-thread performance than any available Intel CPU.
For instance, a slow 5.5 GHz Zen 5 matches or exceeds a 6.0 GHz Raptor Lake in single-thread performance. The faster Zen 5 models, which will be launched in a couple of days, will easily beat any Intel CPU.
Nevertheless, in a couple of months Intel will launch Arrow Lake S, whose single-thread performance should be very close to Zen 5's, perhaps very slightly higher.
Because Arrow Lake S will be made entirely by TSMC on a superior "3 nm" process, it should have much better energy efficiency than both the older Intel CPUs and AMD Zen 5. On the other hand, it is expected to have the same maximum clock frequency as Zen 5, so its single-thread performance will no longer be helped by a higher clock frequency, as it was in Raptor Lake.
> in a couple of months Intel will launch Arrow Lake S, whose single-thread performance should be very close to Zen 5's
Will they? The Intel Innovation event was postponed "until 2025"[1], so I assumed there isn't going to be any big launch like that in 2024 anymore? Arrow Lake S was supposed to debut at the Intel Innovation event in September [2].
The Intel Innovation event was canceled to save money. This has nothing to do with the actual launch of future products, which are expected to bring more money. Intel can make a cheap on-line product launch, like most such launches after COVID.
Since the chips of Arrow Lake S are made at TSMC and Intel does only the packaging and testing, as with Lunar Lake, there will be no manufacturing problems.
The only thing that could delay the Arrow Lake S launch would be a bug so severe that it would require a new set of masks. For now there is no information about something like this.
The suggestion to use AR glasses with this keyboard computer feels very Ghost-in-the-Shell cyberpunk to me. You step onto a train and find some guy with glasses sitting near the train door, staring blankly at other passengers while typing furiously on the keyboard. It looks a bit creepy. After a moment it's revealed he has an AR display and he's writing an email or whatever.
...why do I feel nostalgic for a cyberpunk dystopia?
I think IRC would slightly enhance the cyberpunk aesthetic, and is likely to give strong competition to email for the 'eternal communication protocol' contest!
I'd be curious about the fidelity floor in analog vs digital optics.
I'd guess(?) that you might be able to do more information reconstruction from analog + lens parameters + film parameters than from digital + lens parameters?
Simply by virtue of digital being quantized at some point.
(But signal processing is far outside my area of expertise, so honestly curious)
The year in this link is very important. In the following year, the Elm team decided to not pay attention to the maxim "perfect is the enemy of good" and crippled their FFI story, making it impossible to actually use the language in production[1].
I would recommend steering clear of any language that makes these sorts of decisions -- that certain features are off-limits to the regular developer because they can't be trusted to use them correctly -- because if you find yourself in a situation where you need that feature to solve your problem, you're trapped. I included Go in the set of languages I would recommend steering clear of for years, due to their decision to allow their own `map` type to be generic[2] while no user-defined types could be[3], leading to ridiculously over-verbose codebases, but they have finally corrected course there.
If you're looking for something kinda like Elm but not likely to break your own work in the future, I'd recommend checking out ReasonML[4] instead.
It's incredible that on just about every piece I've ever read about Elm since they made that decision, this has been the first, second, and third comment. Wanting to try Elm for myself, I disregarded this advice, and... immediately ran into the exact same problem! I've never seen such a promising project so conclusively killed by pure developer pigheadedness. And, amazingly, they've never backed down at all. They don't seem to mind that they maimed themselves.
A purity pledge is very typical of cults. It's both a filter and an enforcement mechanism.
This may not apply to Elm. But I imagine it can feel easier and more rewarding to manage a community that's more like a cult than a typical free-for-all open source project.
I think it's probably harder and less rewarding to manage a community where you're constantly taking flak for a technical decision people don't like (and where those people generally don't engage with the pros and cons of said decision!)
Out of curiosity, what did you try to do that you hit that issue right away? I've been writing Elm apps as side projects for years, and never even come close to the kernel thing being a problem. My apps are mostly graphically undemanding games and helper tools. What are the types of applications where this becomes an issue right away?
In my case, it was a regex supplied by the user. Elm 0.18 had no support for constructing a regex at run-time. So I made a package that wraps native RegExp. When 0.19 was released, I couldn't upgrade because of those 5 lines. The regex package eventually got `Regex.fromString`. So I could've upgraded. But at the time I was bumping against limits accessing Intl, and I really hated the prospect of begging some maintainer for access to a browser API.
Elm was the most fun I ever had developing a browser app. Then they decided I shouldn't be allowed to develop a ShootMyFoot module, and it stopped being fun overnight.
> So I made a package that wraps native RegExp. When 0.19 was released, I couldn't upgrade because of those 5 lines. The regex package eventually got `Regex.fromString`. So I could've upgraded.
So it seems like by the time of the official release you could have replaced your five lines with `Regex.fromString`.
But the missing Intl API is definitely a huge pain, and I understand you switching away if you needed it extensively, or expected to want other sync APIs wrapped.
A common way to solve something like this is with proxy objects, like in https://github.com/anmolitor/intl-proxy, but it does not give access to every feature in a nice way.
I went the route of least resistance and built the Elm compiler without the Kernel code check. But in the past few years I hardly needed that anymore.
Yeah, I really feel this is a good way to divide developers into two types. There are those like me, to whom the philosophy of systematically discouraging footguns sounds kind of brilliant. To put it in flattering terms, it's paying a short-term cost for long-term, hard-to-perceive but very real benefits (making certain categories of errors completely extinct). To put the other side in flattering terms: they're not letting the perfect be the enemy of the good, and never compromising on their vision because the tech is holding them back. I think the latter is definitely dominant in the discipline. I'm glad that at least Elm carries the torch for the former, though.
The Elm people themselves worked with the impure browser API all the time. They just don't want me to do it. So it's not even a foolish consistency, just base gatekeeping. It turns me right off.
If you want to divide into two camps how 0.19 was received I'd say it's people who were maintaining a substantial Elm project on the one side and people who weren't on the other. Maybe if you're carrying a torch don't drop it on the ecosystem.
There are a bunch of other options/workarounds/hacks depending on the need. E.g. using getters or creating proxy objects https://github.com/anmolitor/intl-proxy, or event listeners, or postprocessing of the generated JS code, but those shouldn't be the first idea to reach for.
Yes, the answer is always ports when this topic comes up. Unfortunately, one can only pass some basic types through ports. Passing a RegExp or an Intl.DateTimeFormat is not possible. It needs a wrapper on the Elm side, and the Elm people decided I can't be trusted to write this wrapper for myself.
Back then, if I wanted to search a list of strings for entries that match a regex supplied at run-time, I'd have to pass the whole list through a port, filter it outside, then pass it back in, rather than just using the filter function. Ports are asynchronous messaging, which means I'd have to restructure the whole code and wait for a state change when the filtered list is returned.
Let me cite the Elm docs on ports[1]: "Definitely do not try to make a port for every JS function you need."
So! Where does that leave me? Unsupported, that's where. Because I need a JS function. In 0.18 unsupported was fine. They broke it for 0.19 and the project died. Maybe it was dying of other causes anyway, but that one action sure drove people away.
If time is important to you, please correct your statement.
> The year in this link is very important. In the following year, the Elm team decided to [...]
The blog post was published a year after the official release of Elm 0.19, in which access to native code was further restricted in the official compiler.
It was not something that happened without ample prior notice, see for instance a post [1] by the Elm language creator in March 2018 in which he explains his reasoning for the upcoming change.
Or another in March 2017 where he announced that intended change [2]. Even in 2015 he actively discouraged people from relying on these undocumented features and other hacks [3].
I also was not happy with that choice and felt the pain of something being taken away that was possible before, but that didn't stop me from using Elm at work nor from using it for fun.
So far I haven't found an alternative that I liked better, so I will stick to it.
> I would recommend steering clear of [Go] for years, due to their decision to allow their own `map` type to be generic[2] while no user-defined types could be[3]
I think this is very, very different. First, because Go didn't have a cultish purist aversion to generics, banning people and going after them even outside of community spaces. But on the technical side, maps (and slices and channels) were not gated for use by the standard library only; they were publicly available to anyone. Not having generalized a feature is not the same as banning it. There was not even Go syntax to express it. Same as arrays in C, no?
That said, I’m not challenging the recommendation to stay away or not - generics was (and still is, may I add!) quite a pain point with the language. I’m personally quite invested for other reasons (concurrency, networking, std lib), but people come to different conclusions naturally.
If you want to try TEA but not Elm, I recommend Scala.js with Tyrian[1]. Scala.js is a wonderful, mature project, and Tyrian gives you the Elm architecture in a very pragmatic way.
Are you actually using this in a non-trivial application?
The recurring complaint I hear about Scala is the bad compile times. I haven't used the language much, so I'm not sure if this only applies to libraries that heavily use compile-time metaprogramming.
But I really love that with modern tooling we can get a sub-second editor-to-browser feedback loop, even for a three-year-old medium-large project on modern hardware. This was one of the primary reasons I avoided the Kotlin+Gradle JS target: among other issues, the feedback loop was 2-3x slower.
I have a medium-sized project with it, and the initial compile can be slow; however, every recompile usually has the page updated by the time I switch from my editor to it. Not as fast as TS, but worth it for the much better programming language. Tyrian also makes it trivial to set up hot-reload with preserved state.
Yeah, I just read this further down in this topic. Really bummed about it, Elm always seemed so promising, and I thought a healthy fork was what was needed.
I just don’t understand the reasoning for this choice.
Along those lines, Zokka is a fork of Elm that appears to be mostly dedicated to bug fixes that Evan (the creator of Elm) refuses to acknowledge or merge.
edit: there's also Roc, https://www.roc-lang.org/, a language started by Richard Feldman, who I believe was a former Elm core team member. I think Roc aims to accomplish different things than Elm, but it definitely feels like a spiritual successor.
> making it impossible to actually use the language in production
Just FUD. I've been on a big team writing a webapp used by hundreds of thousands each day. While it's not necessarily my own first choice, it was great and the least error-prone piece of software I've written in my career.
It is impossible if you're being responsible. You don't choose a technology that could potentially block you from solving problems in the future unless it brings a huge value to you.
Elm's value proposition is mostly being a functional language with an opinionated MVU library baked in, so you can reproduce that value by picking a better functional language and selecting a similar MVU library in it, which means Elm should never actually cross the value bar above the risk it brings if you need a browser feature it doesn't support and actively prevents you from accessing.
I am just clarifying why I consider it impossible.
Production is not some place you're supposed to cowboy code; instead, you should have a reasonable expectation that you will be able to continue supporting it for as many years as it operates, and it's impossible for anyone to responsibly use technology with known limitations that have bitten other real engineering teams and that have zero workarounds.
If you don't consider that an impossibility for a production environment, then I certainly wouldn't want to work with you on a team with production responsibilities.
I've also had the pleasure of maintaining an Elm codebase. It was filled to the brim with state update bugs. You could never trust what you saw in your browser. Nobody in the team understood how the codebase worked. I spent days implementing some extremely simple changes, which barely worked (to the same standard as the rest of the codebase). Never again.
I've seen the word FUD 4 times on this page already. The "Elm defense force" sounds a lot like cryptobros defending their rug pull. It's such an odd piece of language to adopt over people not liking some JavaScript compiler for perfectly valid reasons.
To me it looks more like the Elm haters are out in force and the Elm users don't participate on this site anymore.
Many hateful comments here below a historic post prompted me to create a new account after a detox phase of >10 years.
So far I try to correct a false statement in the (as of writing this) top comment [1] or add a more neutral view [2].
And maybe I will add more of my personal opinion in the future - or participate in other interesting topics depending on my mood.
I’ve been working with Go for 10 years and I have no idea what you mean by maps being generic before generics came along, nor did maps ever cause me to have over-verbose code bases. The links didn’t seem to help.
Trying to summarise what they were likely saying: Not having generics makes code verbose because you end up copying and pasting your library code to make it handle different types. The complaint about maps being generic was that the Go team clearly saw a need for generics (as they implemented them for maps and some other types) but decided that others wouldn't need them. So they had one rule for them and another rule for everyone else, which people don't like.
So when something is generic, it means it is a type parameterized by other types. So the type `map[string]int` is indeed generic, but no language users could create their own type `btree[X]Y`, for example.
Essentially, the Go developers saw a need for generics and then decided that only they got to create them, whereas most modern language developers either make them available for everyone to implement or don't add them at all.
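To illustrate the shape of the thing users couldn't write (shown in Rust syntax, since pre-1.18 Go literally had no syntax for it; `MultiMap` is just a made-up example):

```rust
use std::collections::BTreeMap;

// A user-defined type parameterized by other types: the kind of thing
// Go's builtin map[K]V could do, but user code couldn't until Go 1.18.
struct MultiMap<K: Ord, V> {
    inner: BTreeMap<K, Vec<V>>,
}

impl<K: Ord, V> MultiMap<K, V> {
    fn new() -> Self {
        MultiMap { inner: BTreeMap::new() }
    }
    fn insert(&mut self, key: K, value: V) {
        self.inner.entry(key).or_default().push(value);
    }
    fn get(&self, key: &K) -> &[V] {
        self.inner.get(key).map_or(&[], Vec::as_slice)
    }
}

fn main() {
    let mut m: MultiMap<String, i32> = MultiMap::new();
    m.insert("a".into(), 1);
    m.insert("a".into(), 2);
    assert_eq!(m.get(&"a".to_string()), &[1, 2]);
}
```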
Is anyone else kinda hoping that GPG/PGP loses enough respect in the tech community that something fresh comes along that really solves a lot of the UX and security issues they have? (Acquiring keys, rotating keys, identifying compromised keys, and most importantly either reaches a large enough percentage of emails sent that usage of it is not in itself an immediate flag to monitor or can be implemented as a side channel not directly including the signature in the email payload itself.)
I think it already has. Maybe not the underlying tech, but certainly for encrypted email.
But I think the outcome of this is not "something fresh" but rather "giving up on the idea of encrypted email altogether". We have far superior communication channels that are secure, easy and private today (Signal, Matrix, WhatsApp, iMessage); that problem is solved. Storing email securely and encrypted is solved too, by providers such as Protonmail, Fastmail, Tutanota and many others.
So what does GPG/PGP really solve still?
The one use-case where I strongly see it shine is signed mail notifications. I've only ever seen Kraken do this: upload my GPG/PGP pub key in their web interface, and have them sign and encrypt mails to me with that. It very much solves phishing and MITM on email notifications (and password-reset mails and such). It comes with two serious problems, though: the UX on my client end still sucks; e.g. I never managed to set up PGP on my Android client, so I cannot read their mails from my phone ("subject: Login attempt from X", body: garbled). And generation and rotation of the keys is as frustrating as ever. It certainly is not a feature for "my mom".
None of which are anywhere close to as ubiquitous as email. None of which are well suited to long messages. Only one of those (Matrix) is federated. Only one (Matrix) wasn't designed specifically for use on mobile devices. Yes, the others can be used on a desktop, but the experience isn't great.
Hold on, there's a sleight of hand here. You're trying to compare all of email against WhatsApp. Now, it's possible that if you took transactional email out of the picture, and just stuck with interpersonal communication, WhatsApp could beat email. But that doesn't matter, because in this discussion, the figure of merit is encrypted email, and on that metric, every single one of those platforms on their own roflstomps email for daily usage.
Only Matrix is federated, like email. I really enjoy sending emails without opening an account for each recipient or being subservient to 1 stack provider for 50 years, with all the inbreeding that entails.
> Now, it's possible that if you took transactional email out of the picture, and just stuck with interpersonal communication, WhatsApp could beat email
Everyone I know uses email. No one I know uses WhatsApp. A couple of people I know use Signal. A handful use iMessage.
> But that doesn't matter, because in this discussion, the figure of merit is encrypted email,
Ok, let's consider one case where encrypted email is commonly used: reporting security vulnerabilities. Do you really think any of these would be a good medium for that? Do you see companies or other organizations putting a WhatsApp username as the contact in their security.txt?
I do want there to be a more secure replacement for email. But most of the newer e2ee messaging systems can't really fully replace email.
Is encrypted email commonly used for reporting security vulnerabilities? It seems like increasingly, more reports occur via bug bounty programs, or are disclosed publicly by the researchers, or are just sent as plaintext emails to security@ or whatever is publicly listed. When I've found security vulnerabilities in somebody's code, I can't think of a time I ever thought about GPG-signing my notice to them.
Yes: I think Signal is drastically better for reporting security vulnerabilities than email. I think if you're actually worried about operational security for accepting vulnerability reports, using email is practically malpractice. The fact is, most security teams, even the very large ones, are not especially concerned about operational security for inbound vulnerability reports.
From a security point of view, absolutely. But there are logistical problems. Currently, a signal account has to be tied to a cell phone number. How does that work when you want it sent to a team instead of an individual? There isn't a sanctioned API, so it is difficult (and unsupported) to set up an integration with bug tracking software. Not to mention that the reporter may not have Signal set up yet.
Most reporters don't have PGP set up, either --- far fewer than have Signal set up. But this is all kind of a moot point: the industry norm is to use plaintext email, and to make ad hoc arrangements (including voice calls) for the very rare cases where things are too scary to email.
Honestly these seem like pretty minor issues compared to the task of properly managing a GPG install.
How do you manage the keys? If you've shared them with a team, how do you ensure someone hasn't taken a copy? What if the key is lost? What if someone ends up replying to the thread without doing the encryption song and dance? It's just such a pain. I'd rather copy and paste something out of Signal and into my bug tracker a thousand times than have to deal with all the footguns of email encrypted with GPG.
>The fact is, most security teams, even the very large ones, are not especially concerned about operational security for inbound vulnerability reports.
The fact that they know no one who uses WhatsApp is a giveaway of their demographics as well. In many countries, "not having WhatsApp" equals "not participating in anything". In my country everything, from my insurance help desk to the coordination of a friend's birthday gift, happens on WA.
Despite my reluctance to use Meta projects, I read and write far, far more WA messages per day than emails.
I think we agree[0] on this. Email encryption isn't ever going to be a thing because of the way email itself works. But email signing would help a lot. I still don't think GPG does this very well, though, because of issues with key rotation/invalidation/etc.
I wish it would just use TOFU ("trust on first use") by default. It's not 100% fool-proof, but actually does cover a large number of use-cases, and is certainly better than nothing.
UI:
"billing@paypal.com: we've never seen this sender before, be careful"
"billing@paypal.com: this is verified to be the same sender"
"billing@paypal.com: ACHTUNG! THIS IS SOMEONE ELSE"
You can of course still manually add keys, and you can even do automatic or semi-automatic key rotation with some new header (e.g. "X-New-Key: [...]" that's signed with the old).
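A toy sketch of the pinning logic (made-up types; a real client would persist the map and do actual signature verification):

```rust
use std::collections::HashMap;

// Trust-on-first-use: pin the first key fingerprint seen per sender,
// then compare on every later message (toy sketch, not a real verifier).
enum Verdict {
    FirstContact, // "we've never seen this sender before, be careful"
    SameSender,   // "this is verified to be the same sender"
    KeyChanged,   // "ACHTUNG! THIS IS SOMEONE ELSE"
}

struct TofuStore {
    pinned: HashMap<String, String>, // sender address -> key fingerprint
}

impl TofuStore {
    fn check(&mut self, sender: &str, fingerprint: &str) -> Verdict {
        match self.pinned.get(sender) {
            Some(pinned) if pinned == fingerprint => Verdict::SameSender,
            Some(_) => Verdict::KeyChanged,
            None => {
                self.pinned.insert(sender.to_string(), fingerprint.to_string());
                Verdict::FirstContact
            }
        }
    }
}

fn main() {
    let mut store = TofuStore { pinned: HashMap::new() };
    assert!(matches!(store.check("billing@paypal.com", "ab:cd"), Verdict::FirstContact));
    assert!(matches!(store.check("billing@paypal.com", "ab:cd"), Verdict::SameSender));
    assert!(matches!(store.check("billing@paypal.com", "ff:00"), Verdict::KeyChanged));
}
```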
> You can of course still manually add keys, and you can even do automatic or semi-automatic key rotation with some new header (e.g. "X-New-Key: [...]" that's signed with the old).
Headers aren't part of an encrypted or authenticated body, so this is trivial to perform a key replacement attack against.
Sorry, is this a rhetorical question? I thought the fact that SSH does TOFU was (somewhat) common knowledge, which is why it spits out all kinds of scary MITM warnings when a host fingerprint changes.
If you're connecting to an SSH server for the first time and don't already have a pre-established host fingerprint, then yes: someone who controls your server's DNS records can redirect you to another SSH host, which you'll then (presumably) enter your password into.
> which you'll then (presumably) enter your password into.
One of the many arguments for using pubkeys, so that's all they'll get. Nevertheless, the rest of the session could still be anything, and agent forwarding should never be used for untrusted hosts.
I'm not sure what you were trying to say here about Telegram, but it's completely unencrypted; for nearly all intents and purposes, messages are stored in plaintext on the server. They just succeeded impressively in twisting that fact away via marketing.
There have been many attempts, usually formed by ignoring the inherent difficulty of creating secure communications while fundamentally being the exact same UX as GPG. It's genuinely incredibly tedious to see there is a new "alternative" and find out once again folks are over-promising and under-delivering.
> Is anyone else kinda hoping that GPG/PGP loses enough respect in the tech community that something fresh comes along that really solves a lot of the UX and security issues they have?
This already exists. It's just not a single, all-in-one tool - the answer is to use a tool that's fit for the specific purpose you're trying to accomplish (secure messaging, encrypted backups, encrypting application data, etc.)
There is not, and never will be, a modern one-size-fits-all approach because the entire industry has moved on from that model - over the last 30 years, we (collectively) have learned that it's inherently insecure to design a tool to do so many disparate things and expect people to use it correctly each way.
All of these use cases are encrypting files, some of them with a few extra steps as sauce. Stuff like WhatsApp/Signal is the exact UX that GPG has, "fixed" by instead ignoring everything that's hard (trust). The asymmetric cryptography is not fundamentally novel or interesting, and the end result of it could be applied to literally anything if they allowed you to touch your own things (which they don't). These modern solutions are built on the infantilisation of their own users, nothing else.
That's exactly the wrong way to look at it. Everything is potentially a file, but not all cryptosystems have the same use cases. The needs of message encryption (forward and future secrecy, for instance) are not at all like the needs of backup encryption (deduplication, for instance). This is one of the biggest things wrong with the PGP model of retrofitting cryptography onto problems, and why it has virtually never been successful at any of them.
I'm just saying, I've used and tried to get people to practice WoT with OpenPGP on the order of a decade or more. There simply isn't a demand for protected communications because normies haven't started suffering at the hands of government for online shenanigans yet.
After a while, you sort of want bad things to happen so society will move forward... Humans are so reticent to act to protect themselves from a threat they don't see or understand the capabilities of.
Normies use encrypted communication way more than the cumulative historical usage of GPG or PGP; Signal, WhatsApp, iMessage, and now Facebook Messenger all offer better privacy for the average person than GPG or PGP ever did, and they are used by orders of magnitude more people.
It doesn't pass the smell test. FB has been caught numerous times mishandling data. Apple is a walled garden you can't trust, same with WhatsApp. Unless Signal is liberally licensed, we can't verify privacy there.
Also, using a phone number as an ID is a de-anonymizing technique. Industry absolutely does not do this correctly, or they wouldn't be a juicy source of analytics data. Governments court these companies to exfiltrate data they have on people. If the data were adequately protected, they wouldn't be able to do this.
Sending something over HTTPS is encrypted but it's not private to the other end. Nice sleight of hand but I actually understand what I'm talking about.
I read about the WoT and I found the idea fascinating, but I've never used it myself. Would love to hear anything you have to share about using it in practice.
> Acquiring keys, rotating keys, identifying compromised keys, and most importantly either reaches a large enough percentage of emails sent that usage of it is not in itself an immediate flag to monitor or can be implemented as a side channel not directly including the signature in the email payload itself.
Kinda. But S/MIME has its own problems[0], mostly related to you as a recipient being unable to choose who is authorized to send you encrypted email (and so spam and malware filters don't work).
On top of that, GPG and S/MIME's support of encrypted-at-rest email is, imo, a fool's errand. Handing a payload of data to a third party that the recipient can eventually query to retrieve makes it much easier to grab a hold of and try to decrypt in the future. The same is true of SSL to an extent, but SSL traffic is much more voluminous, such that saving all of it to eventually crack and decide if there's anything worthwhile in it is unlikely.
The only real way to transfer private data between two users is to do it live with an ephemeral channel, whether that's in-person or via SSL or etc. The only value I see in GPG and friends is in verifying authenticity of the contents - signing the email - not encrypting those contents. Email has, and always will be, an open protocol, for better or worse.
> mostly related to you as a recipient being unable to choose who is authorized to send you encrypted email (and so spam and malware filters don't work).
That's a problem with all encryption anyways. Inspection has to be done at the end-user's device. So I don't think it's fair to hold that against S/MIME.
> On top of that, GPG and S/MIME's support of encrypted-at-rest email is, imo, a fool's errand.
If it can be done with E2EE messaging apps, sure it can be done with email. Long-term storage is a really difficult problem anyways.
> The only value I see in GPG and friends is in verifying authenticity of the contents - signing the email - not encrypting those contents. Email has, and always will be, an open protocol, for better or worse.
To some extent I agree. An ubiquitous deployment of digital signatures would already solve a bunch of problems and most of the rest are handled by transport encryption.
> That's a problem with all encryption anyways. Inspection has to be done at the end-user's device. So I don't think it's fair to hold that against S/MIME.
I don't think that has to be the case, though. Protocol negotiation is a thing; SSL negotiating the version of the protocol to use, HTTP 1.1 -> 2.0 negotiation, etc.
You could imagine a mail protocol that starts at an encryption level; then, during the negotiation process, when the mail-to-be-delivered provides a public key, the recipient server checks that key against a list of keys accepted by the user and, if it's not included, attempts to negotiate down to an unencrypted version of the email.
The sender of the email could choose to not allow that downgrade and get an undeliverable mail error, or they could choose to allow the downgrade to plain text/html email. This could then be run through the standard spam/malware filtering as usual and the spam eliminated, while email that came from already trusted email can skip those filters because the user has already judged them worthy of accepting and keeping the communication private.
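A sketch of that negotiation, with hypothetical policy and outcome names just to make the flow concrete:

```rust
// Hypothetical negotiation flow for the scheme described above: the
// recipient's server checks the sender's key against the user's
// accept-list and either proceeds encrypted, downgrades, or rejects.

#[derive(PartialEq)]
enum DowngradePolicy {
    AllowPlaintext,
    RefuseDelivery,
}

enum Outcome {
    DeliverEncrypted, // trusted key: skip the filters, stay encrypted
    DeliverPlaintext, // goes through normal spam/malware filtering
    Undeliverable,    // sender refused the downgrade
}

fn negotiate(
    sender_key: &str,
    accepted_keys: &[&str],
    sender_policy: DowngradePolicy,
) -> Outcome {
    if accepted_keys.contains(&sender_key) {
        Outcome::DeliverEncrypted
    } else if sender_policy == DowngradePolicy::AllowPlaintext {
        Outcome::DeliverPlaintext
    } else {
        Outcome::Undeliverable
    }
}

fn main() {
    let accepted = ["key-from-my-bank"];
    assert!(matches!(
        negotiate("key-from-my-bank", &accepted, DowngradePolicy::RefuseDelivery),
        Outcome::DeliverEncrypted
    ));
    assert!(matches!(
        negotiate("unknown-key", &accepted, DowngradePolicy::AllowPlaintext),
        Outcome::DeliverPlaintext
    ));
}
```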
So I don't think that's an intrinsic difficulty of all encryption schemes for email, but...
> If it can be done with E2EE messaging apps, sure it can be done with email. Long-term storage is a really difficult problem anyways.
So first I'll state that I don't think all E2EE messaging apps reach the following bar, either, but the difference between an ephemeral SSL-encrypted communication channel and an email, fundamentally, is that the ephemeral channel won't be written to a disk somewhere, while the email will.
The window in which it is possible to get a copy, and the difficulty in obtaining it, are much more in favor of secrets staying secret in the ephemeral channel than in encrypted email. The data payload persists longer and is likely encrypted with the same private key across many emails, so getting the emails and getting the keys are much easier than with an ephemeral channel that generates a temporary set of keys on each connection and never persists any of it to disk (so storing the communication in the hope of eventually grabbing the keys from the user's machine by virus, social engineering, or plain ol' physical theft doesn't even make sense the way it does for GPG-encrypted email).
When some MUAs announced Autocrypt support a few years ago I got excited again, but unfortunately nothing came of it. I haven't been able to auto-update my key on my partner's MUA; it seemed "support" meant different things to different projects, but none focused on the goal: making PGP-encrypted email usable (and more secure).
There was a year or two where I actually began to receive autocrypt headers from random people and started encrypting with them.
Unfortunately, this momentum was completely killed by Thunderbird, which decided to remove support and proceed to reimplement traditional manual email encryption :(
> something fresh comes along that really solves a lot of the UX and security issues they have?
I'm working on this! It's called Stamp (https://stamp-protocol.github.io/) and takes a lot of the issues I've had with PGP/GPG and creates a more modern refresh. It's definitely not simple but my hope is that having sane defaults and writing good interfaces will help with this.
Unfortunately it just went through a rearchitecting and the docs are horribly out of date, but the basic concept persists. In the current version, instead of having different hardcoded key types (alpha, publish, etc), there's now the concept of "admin keys" and "policies." Policies decide which keys can do what as far as managing the identity, so it's possible, for instance, to have a policy that gives a key god powers, or a policy that says "if three of these four signatures match, the entire key and policy set can be replaced" (aka multisig recovery mechanisms). Also, in the current version, "forwards" have been entirely removed and replaced by claims.
The goal is to use this as a means to act as an identity in p2p systems. My issue with p2p systems is that they always punt on identity, making it a function of your device and some randomly generated keypair. That said, Stamp can definitely be used more generally.
Right now I'm focusing on the underlying network that syncs identities between devices and also stores identities publicly, circumventing the need for keyservers and all that stuff.
I'd love to see something displace GPG, particularly the "pipe to an external application" usage paradigm that's just awful in so many ways. At the same time I'm not ready to give up the only cryptosystem that we know to have actually thwarted the NSA in real-world conditions, particularly when so many of the proposed alternatives require absolute non-starters like publishing your phone number or using an unverifiable auto-updating client application.
But fundamentally there's no money in it. GPG is Like That largely because it's maintained by a grand total of one (1) guy, which is all the OSS community can afford to fund. Who's going to pay for a crypto platform, except when it's the CIA (or equivalent) sponsoring useful idiots?
That's just security-by-obscurity and doesn't actually buy you anything except a speed bump for a hacker. It was a bogus argument from proprietary software vendors against open source a couple of decades ago, and it is a bogus argument for web services, too.
The presence of an error at all is a tell for the hacker as they search the surface area of the service's API, making the wording unclear is simply anti-user (sometimes quite literally when these errors are used as part of anti-fraud measures and shut down accounts without informing the user of what they even did wrong).
I mean showing the exact path to the configuration file likely isn't a good idea, so there is likely a mix of user friendliness and avoiding information leakage.
The messaging to the user should be actionable, so sending them the exact filename doesn't make any sense; give them a clear sentence about what exactly went wrong and what should be done to fix it if possible, or a UUID (or "Guru Meditation" value if you're feeling old school) to give to the helpdesk, which can then be used to look up all of the relevant information on the other side.
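Something like this sketch of the error-reference pattern (toy ID generation; a real service would use a proper UUID):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Log the detailed, internal error server-side; show the user only an
// actionable message plus an opaque reference the helpdesk can use to
// correlate with the logs.
struct UserFacingError {
    message: String,   // actionable, no internal paths or filenames
    reference: String, // quote this to support
}

fn handle_failure(internal_detail: &str) -> UserFacingError {
    // Toy reference ID derived from the clock; a real service would
    // generate a UUID here instead.
    let reference = format!(
        "ERR-{:x}",
        SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_nanos()
    );
    // The server-side log keeps the full detail, keyed by the reference.
    eprintln!("[{reference}] {internal_detail}");
    UserFacingError {
        message: "Your settings could not be saved. Please retry or contact support.".into(),
        reference,
    }
}

fn main() {
    let err = handle_failure("config /etc/app/prod.toml unreadable: EACCES");
    println!("{} (reference: {})", err.message, err.reference);
}
```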
We were talking about obfuscating what the user did wrong and giving them a misleading message to somehow improve security. Saying that giving them information they can't actually use (like the path to a configuration file in a proprietary web service) is what we're discussing is moving the goalposts.
I think this is probably taking the advice about not letting people on the sign-in window know if a username/email exists in the service or not (to determine whether or not it is worth spending the time trying a list of potential passwords for another user and access data they shouldn't) and expanding it without understanding the nuance. Before they have signed in you don't know who or what is accessing the login path and therefore there's much less confidence that it's a legitimate user. Once the login is successful and that auth token is being used, though, the confidence is much higher and obfuscating details of the relationship between the company and that particular user is pretty strictly anti-user since the user can no longer be certain if the implicit contract of services provided will continue as expected or not. (Couple that with network effects, migration costs, etc, and the relationship becomes even more lopsided.)
And you don't think anti-fraud teams use error logs triggered by users as a signal for potentially banning them?
These teams are incentivized to eliminate fraudulent accounts that cost the company money and are pressured/punished when their tools produce false-negatives (fraudulent users that are considered okay), but get no such pushback on false-positives (okay users that get flagged as fraudulent), and accounts that are triggering errors in the backend service(s) can look a lot like someone actively trying to hack it. Basically any sort of anomalous behavior that correlates with negatives for the business get flagged by these tools, and doing so unjustly is not an explicit goal, but it isn't really punished within the corporation.
(The false-positives do get negative feedback in the rare instances when it blows up on social media, so these teams often include a whitelist of high profile accounts to just skip over but still impact the regular users capriciously, only "solving" the false-positive problem insofar as it impacts the business.)