Hacker News
Age: A simple, modern and secure file encryption tool (github.com/filosottile)
466 points by nahikoa on Dec 27, 2019 | 197 comments



This is great, it fills a gap that I've long defended as PGP's remaining legitimate use case. Of course, now we just need to work on the adoption problem. Despite the fact that PGP adoption is a well-known joke in itself, all the tools designed to replace it (with the exception of the IM space) have somehow managed to achieve even lower use rates.

It's been almost five years since Magic Wormhole was first released, and about half a year since that popular Latacora post recommended it for transferring files and said "Someone stick a Windows installer on a Go or Rust implementation of Magic Wormhole right away". Guess what you're still not going to find a reliable Windows build (let alone a GUI) for? Yep, that's right. Despite the fact that most of these projects come from a felt need for better alternatives to PGP for the average user, very few of them have actually come up with a product that's more accessible to the average person.


> about half a year since that popular Latacora post recommended it for transferring files, and said "Someone stick a Windows installer on a Go or Rust implementation of Magic Wormhole right away".

I read this and then went and ported wormhole to Go: https://github.com/psanford/wormhole-william. There's no Windows installer but it's pure Go so building on Windows or cross compiling for Windows is easy. Besides Windows I also want to support iOS and Android (I have a very rough working react native frontend right now).


FWIW I'd be much more comfortable recommending Magic Wormhole if the default was tweaked to give bad guys only say 1-in-2^32 or worse chance of success.

It's roughly the same reasoning as for your Windows GUI argument. This tool is now very suitable for people who understand what it does, but it is not yet well adjusted for users who lack that understanding.

Today - when most Magic Wormhole users can probably explain what a PAKE is - if you attack a Magic Wormhole transfer and cause errors (by guessing wrong) those users will react by increasing the length of the Wormhole code. But if we popularize it without fixing this default, do you think my sister knows to do that?
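To put numbers on it, the default wormhole code is two words, each drawn from a 256-entry wordlist (8 bits per word), so a sketch of the attacker's per-guess odds looks like this (a toy calculation, assuming that default):

```python
# Per-guess success odds for an attacker racing to claim a
# magic-wormhole code of n words, each drawn from a 256-entry
# wordlist (8 bits per word). The default code uses 2 words.
def guess_space(n_words: int) -> int:
    return 256 ** n_words  # attacker succeeds with 1-in-this odds

print(guess_space(2))  # 65536 -- the 1-in-2^16 default
print(guess_space(4))  # 4294967296 -- the 1-in-2^32 level suggested above
```

So going from the default to the suggested 1-in-2^32 level means doubling the code from two words to four.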


But in a certain way these things are aspects of the magic wormhole CLI, not the underlying tech.

It should be trivial to increase security on failed attempts, or to use a higher default security level for a GUI frontend.

The CLI is clearly meant for somewhat technically versatile users (I mean, it's a CLI), so I think it's normal to make some adaptations when targeting other user groups. E.g. adding explanations for some aspects, in addition to the things I already mentioned, is quite doable for a GUI.


There is now a solid Go implementation of `wormhole` (it's my daily wormhole driver). It works on Windows. It just needs a UI.

Since PGP has almost no serious real-world adoption (search your feelings; you know it to be true), it's wide open for replacement. People should use `wormhole` for file transfer in preference to `age`-encrypted files, if the only reason they're encrypting is to get the file safely across the wires.


> it's wide open for replacement.

Totally agree there, but I'll remain skeptical until I actually see that adoption start to happen. Certainly it's not going to until there's a nice GUI. (It's kind of sad, actually. Wormhole has such a nice TUI that would be utterly trivial to wrap in a simple Qt interface or something.)


Right? RIGHT? I keep saying: everyone I've ever taught `wormhole` to does the same thing I did when 'lvh showed it to me: immediately and gleefully wormholing everything. It's such a great tool; the good people of the world deserve it.

It kills me that so many UI-type people build new encrypted email systems and nobody works on putting solid UI on cool-kid crypto like `wormhole`. It's such a high-impact project and it's missing exactly the skillset these people are strongest with. I mean that sincerely: as I think is obvious to everyone, crypto people can't do UI to save their lives.


Do programs exist that generate a basic portable GUI (in Qt, for example) from a command-line application?

Basically what you need is a list of parameters, their types, allowed ranges, and the preferred way to input each parameter value (file selection, input box, slider, ...), then a button to run the program.

EDIT apparently there is https://github.com/chriskiehl/Gooey for Python.

EDIT2 Hmm. maybe fbs is good enough https://build-system.fman.io/
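The parameter-spec idea is essentially what argparse already captures, and Gooey works by decorating exactly such a spec (its @Gooey decorator wraps an argparse-based main). A hypothetical sketch of the kind of declarative spec a CLI-to-GUI generator can introspect (the file-transfer arguments here are made up for illustration):

```python
import argparse

# Hypothetical file-transfer CLI: a GUI generator can map each
# argument to a widget from its type/choices/action alone.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="Send a file")
    p.add_argument("path", help="file to send")              # -> file picker
    p.add_argument("--code-words", type=int, default=2,
                   choices=range(2, 9), help="code length")  # -> spinner/slider
    p.add_argument("--verbose", action="store_true")         # -> checkbox
    return p

args = build_parser().parse_args(["report.pdf", "--code-words", "4"])
print(args.path, args.code_words)  # report.pdf 4
```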


One can literally use a drop-in replacement for argparse like Gooey[1] to get a reasonably good GUI for the Python implementation.

IMO the challenge is mobile clients for iOS and Android.

[1] https://github.com/chriskiehl/Gooey


What operating systems does Gooey support? Weird that they don't mention that anywhere. Is it only Windows and macOS?


It’s based on wxWidgets so I assume it’s supported (at least roughly) wherever wxWidgets is supported. However, I’ve only used it to package a few small utilities for friends, and neither myself nor any of them run Linux Desktop, so I can’t be sure.


Flutter desktop can help here.


Since PGP has almost no serious real-world adoption (search your feelings; you know it to be true)

Checks...it's not true. Maybe the original email use case never caught on, but that's not the only one. For example, PGP is a standard way to transfer Visa, MasterCard, or Diner's Club credit card transaction files. We have thousands if not tens of thousands of entities transferring PGP encrypted files every day, and we get new requests for PGP enablement on a regular basis. This is a deeply embedded business process (even embedded in many corporate financial systems like Oracle Financials), and it's not going away any time soon.

Other use cases...yeah, PGP should go away.


Not only should PGP go away for that use case, but it easily could; very few people would need to be convinced to upgrade it to a better format. What's held that back from happening is nobody agreeing on what that better format is; it's the same reason we're only now getting WireGuard after almost 2 decades of IPSEC VPNs.


Not only should PGP go away for that use case, but it easily could

Says someone who has never had to do it.

very few people would need to be convinced to upgrade it to a better format

Only the tens of thousands of current users who I personally have who would see no reason to change something that currently works and is secure. I have, in fact, suggested a number of better solutions over the years.

Hell...it took us 10 years to convince all the third parties that plain FTP was probably a bad idea. And there's still a tiny handful of very, very large companies that still say 'meh' and force us to keep an FTP server around.

Must be nice to not have to deal with real customers.


Is there someone you know with a similar name to mine that you think you're talking to? The kinds of issues you're talking about are my actual full-time job.


Oh, my...don't you know who I am? Classy. I guess my aversion to being Internet Famous makes me easy to condescend to.

My "actual full-time job" is building and operating security teams for Fortune 1000-sized companies, not startups. These kinds of issues are also what I do every day. I just do it with far more customers, internal stakeholders, budget, technical debt, politics, employees, governance, geography, etc., etc. And I actually do those hard things; I don't just say "you should do this...it should be easy".

Consider that just maybe your perspective doesn't represent the totality of the security landscape. Things that are easy when you're consulting for the latest Foo of Bar startup or whatever is spooling out cat videos this week are very, very hard when you're dealing with entrenched, interconnected business processes processing billions of dollars of other people's money. Just a thought.


I assume the go wormhole implementation you are talking about is https://github.com/psanford/wormhole-william . I've been working on building a mobile interface for it. After that I may look at doing a Windows UI.


Does the security of using magic wormholes depend on using a trusted relay? Should I be running my own relay?


The security doesn't, but availability might: the relay is very easy to DoS (not even DDoS!). This is the one thing that I think Thomas has jumped the gun on with his recommendation: the current protocol and infrastructure won't survive a DoS attack, which is tragically likely if magic-wormhole's popularity increases. Brian Warner is aware of this and has written about it.

https://github.com/warner/magic-wormhole/blob/master/docs/at...

https://github.com/warner/magic-wormhole/issues/107

https://github.com/warner/magic-wormhole/issues/150

I noticed the same issue as Joey Hess, except a couple of years later; then when I was giving a magic-wormhole demo, some of the audience members accidentally broke some of my live transfers in various ways.

(Otherwise, I feel just as gleeful as Thomas does. It's an awesome tool and fun to use!)


Isn't this basically what syncthing is (same functionality as wormhole, with gui)?

I also thought I'd used a GUI Dat protocol client to transfer files, but maybe it was only in the terminal.


Which Go implementation do you recommend? There seem to be a few.


> It just needs a UI.

Would you consider an Electron app adequate? ;)


A 200MB binary consuming 300MB of memory and 1% CPU at idle? Hell no...


I asked the guy who wrote "JavaScript Cryptography Considered Harmful" if a JS app would be adequate followed by a winky face.

The /s was implied.


The document that says right at the start that it's only talking about browser Javascript crypto, and not standalone?

Not that the crypto itself would be handled by Javascript in an Electron app in the first place...


Yeah I did see the sarcasm.


Seriously, I'm just fine with Electron applications. It's 2019 (if only for a couple more days). We have RAM; this is a good thing to burn it on.


2 billion individuals on this planet are unable to afford a shiny new computer.

"I don't mind about resource usage" is a proxy for "I don't care about those people"

Also, many devices do not allow RAM upgrades.


If I wanted to "burn" my RAM I would have just bought less.


That's missing the point. $2 of RAM in exchange for security is a good deal. Go ahead and use the command line version yourself, but if you're not going to contribute to a native GUI version then don't be a party pooper about Electron.

Electron's only a problem when there's lock-in of some kind.


I think that people who say this think "yeah, running one Electron app is no problem". The problem comes when everything is Electron and now, like the ocean, you have no memory.


You don't need everything open at once.

And you can fit a whole lot of Electron apps into an extra 8 or 16GB.


The newest baseline Dell XPS 13 (not even Inspiron) comes with 4 GB of memory by default. The very idea of an extra 8 GB is hilarious because manufacturers haven't kept up with the times (my laptop purchased 9 years ago had 8 GB), and by extension the median consumer's laptop hasn't either. Nobody has an "extra" 8 GB outside of people with the money to just unthinkingly buy the top of the line model or developers who know what they're doing.


We're talking about buying extra memory, not defaults. And while an XPS 13 is trying to be a macbook, on an XPS 15 you just pop open the back cover and shove more memory into it.


Sadly you do need a lot of things (such as chat programs, an email client, a browser, and potentially an editor) open at once.


Nope. Many laptops cannot take more than 16GB. And you have to pay hundreds of dollars even for that privilege in many situations.

An Electron app will hurt adoption, give the project a bad name, and make it likely that a proper UI never materializes. Please don't.


Never mind 16GB. There are quite a few 8GB or even 4GB laptops (4GB ones are usually >= 4yr old, but it's not uncommon to keep crappy laptops for that long outside the tech-savvy circle) among my friends and family.


I'm just replying directly to the "I would have bought less" comment, which implies having control over your memory amount. It's not hundreds of dollars when that happens. You can get all sizes of laptop memory for under $4 a gigabyte. And if you've maxed out at 16GB, you're not really at a spot where .3GB is a big burden.

> An Electron app will hurt adoption, give the project a bad name, and make it likely that a proper UI never materializes. Please don't.

As opposed to having nobody use it in the first place.


> $2 of RAM in exchange for security is a good deal.

1. I don’t know what RAM you’re buying, but mine costs way more than that. It’s especially bad if you’re using AMD.

2. How does using electron apps make me more secure than using native apps?


> 2. How does using electron apps make me more secure than using native apps?

The electron app exists. The native app does not.

You think that ranting against electron is somehow making a native app appear, but the reality is that there are way more js/electron devs than there are qt/gtk/wpf devs, and the choice in most cases is either an electron app or no app.


> 1. I don’t know what RAM you’re buying, but mine costs way more than that. It’s especially bad if you’re using AMD.

DDR4 at 3200MHz, plenty to make Ryzen happy, is available at $4 per GB on Amazon. That gets you half a gig for $2. But that's not even our max budget. If we use oefrha's estimate of 300MB of RAM eaten by Electron, then we have a budget of $6.80 per GB. That gets you very nice RAM.


If only it was $2 of RAM...

> in exchange for security

Electron is not famous for its security. See for example https://www.trustwave.com/en-us/resources/blogs/spiderlabs-b... and https://securityboulevard.com/2018/06/june-vulnerability-of-...


Those issues are only relevant to applications that display arbitrary HTML and already have XSS issues. Avoiding XSS is doable; with most web frameworks you're protected from XSS by default and have to specifically turn off the safeties to get XSS.


> Those issues are only relevant to applications that display arbitrary HTML and already have XSS issues

Such as signal! https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injec...


How about flutter for desktop?


https://github.com/schollz/croc is written in Go, doesn’t need an installer and has worked reasonably well for me on Windows.


croc does not interop with magic-wormhole but tries to accomplish the same thing.

I made croc because it was really hard to install wormhole on Windows (especially for my non-dev friends). Also, I wanted croc to support resuming transfers, which has been stalled in wormhole for a while now. [1]

[1]: https://github.com/warner/magic-wormhole/issues/88


Why doesn't croc use magic-wormhole's protocol? Obviously its selling points are ease of installation, performance, and portability; it's too bad that compatibility was lost in achieving them.


There's also https://github.com/psanford/wormhole-william. It's written in Go, doesn't need an installer and is compatible with the official wormhole client.


It does need binaries, though.


Fair. I'll get those added.


I published binaries for the latest release: https://github.com/psanford/wormhole-william/releases/tag/v1...


So how do I send an email securely without PGP? I've never been aware of any real alternative. The problem is just that people don't care enough to use it.


You don’t. But you don’t with PGP either: bad crypto-primitive defaults; no header protection, including From, Subject, or Reply-To; and bad typical UI that shows unauthenticated text/plain in with authenticated parts.

The case where I still use PGP is receiving reports of bugs from unaffiliated researchers, and I should replace it with a form on an HTTPS web site.


>no header protection including From or Subject or Reply-to

This is email we're talking about, not IM. There is no good way to do that without a lot of added complexity and hassle once the email ends up in your archive.

>bad crypto primitive defaults

If you mean forward secrecy, then see the preceding comment.


It's not forward secrecy. It's literally that the default ciphers and modes for PGP are a mess, and it is so configurable that it is full of footguns.


I don't see the email header issue, unless I'm trusting PGP public keys without any thought. If someone forges an email to me that has a PGP-encrypted message in it, it doesn't validly decrypt unless my crypto discipline is already so lax that I'm going to have issues with any system.


Why is there a problem with PGP? Everybody seems to agree there is, but why? Isn't it secure and versatile?


PGP is secure and versatile. It's also somewhat dated, and GPG is quite an awful implementation to interact with. It barely composes with anything that wants to abstract its insane UI into something an average user understands. It has dated defaults and supports outdated cryptography, while GPG has a very dated UI and refuses to properly support bindings so applications can safely interact with it.


NSA recommended people stop using ECC <384 bits (https://apps.nsa.gov/iaarchive/programs/iad-initiatives/cnsa...).

There are applications where the extra time and space of something like ed448 present uncomfortable trade-offs.

File encryption is not generally one of those applications.

So I find this a little disappointing.

But I suppose that NIST PQ will finalize in the not-too-distant future and this will get replaced by something that hybridizes with a PQ scheme. (I say "replace" because the expectation that a pubkey is something you can easily copy and paste doesn't really work with the PQ schemes you'd likely use with file encryption.)

What happens if auth fails part way through the file? Do you get a truncated decryption on stdout? -- or is this buffering the whole input in memory?


Answering my own question: the reason it would continue to use 255-bit ECC is that an objective is (ab)using people's GitHub SSH authentication keys.

FWIW, if the idea there is that you'll be able to send encrypted reports to GitHub users based on their SSH keys... that might not work so well in the long run, especially for security-conscious projects, since good practice would have their GitHub SSH key living in a keyfob that won't decrypt messages for them. :)


Native age keys are pure X25519 with no connection to SSH keys. SSH support is kind of a growth hack, I made sure it didn't impact the rest of the design.

Recipient types are the one parameterized thing in the spec, so if we need to switch to Ed448 or a PQ hybrid at some point we absolutely can, without even bumping the version.


I was ready to use this in a project, but "made encryption weaker because GitHub" is not exactly high on its selling points.


It's linked right from the spec: it's a streaming AEAD construction, and getting that is literally one of the motivations behind the tool. It does not buffer the whole message in memory, or fail to detect truncation.


I think it's reasonable to ask for clarity on this point. One might, reasonably IMO, expect buffering. (And E2BIG if a size limit is exceeded.) You have to dive deeper into the spec than a genre-savvy user should need to in order to learn this.


While the underlying algorithm detects truncation/corruption, the specification does not describe how the command line tool signals this to the caller.

This is problematic since a caller needs to be aware of the need to appropriately handle truncated plaintext output. The readme needs to warn about this pitfall.


I think that you misunderstood nullc's question. They are asking what happens if at some point one of the poly1305 MACs in the file is incorrect. Not if someone truncates the file.


We're talking about the same thing.

I saw that it used a streaming AEAD, but that's actually what inspired my question.

Since (from the github page) it reads stdin, it can't two-pass the file.

So it appears that if you hand it a file with midstream corruption it's going to feed a truncated input down your pipeline.

That has consequences. They may well be less serious consequences than buffering a potentially unlimited amount of data in memory :), but it's useful to make the behavior very clear because it wouldn't require too advanced an idiot to make something that was exploitable on this basis.


It's Rogaway's STREAM scheme from https://eprint.iacr.org/2015/189.pdf. Are you pointing out a problem in the paper, or in some specific idiosyncrasy you see of how it's implemented here? If so: what is it?

The AGL post the spec links to directly talks more generally about the high-level strategy: you're buffering chunks of files. You're only ever releasing authenticated plaintext. If you're piping to something processing plaintext on-line, that thing might need to wait for the end-of-file signal before processing or else potentially operate on a truncated file (by some integral number of chunks). `age` is still just a Unix program.
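The release-only-verified-chunks pattern can be modeled in a few lines. This is a toy sketch using HMAC tags only (no encryption, and unlike real STREAM it has no final-chunk flag, so it cannot detect truncation at a chunk boundary); age's actual format is ChaCha20-Poly1305 over 64 KiB chunks:

```python
import hashlib
import hmac

CHUNK = 16  # tiny for illustration; age uses 64 KiB chunks

def _tag(key: bytes, i: int, chunk: bytes) -> bytes:
    # Bind each chunk to its index so chunks can't be reordered.
    return hmac.new(key, i.to_bytes(8, "big") + chunk, hashlib.sha256).digest()

def seal(key: bytes, data: bytes) -> list:
    return [data[o:o + CHUNK] + _tag(key, i, data[o:o + CHUNK])
            for i, o in enumerate(range(0, len(data), CHUNK))]

def open_stream(key: bytes, sealed):
    for i, blob in enumerate(sealed):
        chunk, t = blob[:-32], blob[-32:]
        if not hmac.compare_digest(t, _tag(key, i, chunk)):
            raise ValueError("authentication failed")
        yield chunk  # only verified chunks are ever released

key = b"k" * 32
sealed = seal(key, b"chunk zero......chunk one.......chunk two")
sealed[1] = bytes([sealed[1][0] ^ 1]) + sealed[1][1:]  # midstream corruption

released = b""
try:
    for chunk in open_stream(key, sealed):
        released += chunk
except ValueError:
    pass
print(released)  # b'chunk zero......' -- a truncated but authentic prefix
```

The consumer gets an authenticated prefix and then an error, never unverified bytes; that is exactly the "might operate on a truncated file" behavior a downstream pipeline has to plan for.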


My question was asking to confirm that it indeed will put out a truncated output when given a mid-stream corrupted input (and that it doesn't do something like buffer just to validate).

That behavior should be clearly documented, so that users can be advised that their pipelines need to safely handle that case.

> that thing might need to wait for the end-of-file signal before processing or else potentially operate on a truncated file

Exactly. The docs should say this clearly, or someone will manage to create an interesting vulnerability with it eventually. :)

Could go with a message that points out that encryption doesn't authenticate the source-- a not-uncommon misuse that shows up with PGP, where people assume the source is authentic if the input was encrypted, even where no signature is used. (The fact that corrupted input gives an "authentication failed" message might be particularly misleading.)


It's streaming on-line encryption. That's literally the point of streaming encryption: not buffering whole messages. The rest of your point directly follows from "not buffering whole messages".


Indeed. And the readme and the usage output make no mention of streaming, buffering, on-line processing, authentication, or anything related.

This is a potential security relevant behavior that most users-- who haven't written or analyzed tools like this-- would find surprising.

For those following along, I went and tested it-- since the behavior wasn't documented or clear from the code. If it encounters midstream corruption, it truncates the output, exits with a non-zero return code, and prints some error text to stderr: "Error: chacha20poly1305: message authentication failed\n[ Did age not do what you expected? Could an error be more useful? Tell us: https://filippo.io/age/report ]"

If the input is truncated, it either does that-- or if the truncation is on a block boundary it prints "Error: unexpected EOF\n[ Did age not do what you expected? Could an error be more useful? Tell us: https://filippo.io/age/report ]" instead.

It's not a problem, but it should be documented.


This is a security footgun and a vulnerability waiting to happen, but bash is at fault, not age. age does the best it can do (while maintaining O(1) memory requirement) by exiting non-zero, but the shell swallows that if it's in the middle of a pipeline.


IMHO it's not that bad. It's actually quite usable, and reasonably easy to handle safely.

Use

  bash: set -eu -o pipefail
  # unfortunately pipefail is not POSIX
and some care when writing scripts. Possibly decrypt to a file first.

A proper and likely footgun would be decrypting and passing tainted plaintext and only then exiting nonzero. E.g.

  decrypt < file | sh  # owned
Definitely should be documented either way.
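The "decrypt to a file first" advice amounts to: collect all output, check the exit status, and only then hand the plaintext to the next stage. A hedged sketch of that wrapper pattern (the decryption command is a placeholder argument, e.g. an age invocation; `cat` below is just a stand-in for the demo):

```python
import subprocess

def decrypt_then_use(decrypt_cmd, ciphertext: bytes) -> bytes:
    """Run the decryption command to completion and check its exit
    status before releasing any plaintext to the next stage."""
    proc = subprocess.run(decrypt_cmd, input=ciphertext,
                          capture_output=True)
    if proc.returncode != 0:
        # Discard possibly-truncated output instead of piping it on.
        raise RuntimeError(proc.stderr.decode(errors="replace"))
    return proc.stdout  # fully verified plaintext

# Stand-in for demonstration: `cat` "decrypts" successfully.
print(decrypt_then_use(["cat"], b"plaintext"))  # b'plaintext'
```

This trades the O(1)-memory streaming property for the guarantee that downstream consumers never see a truncated prefix, which is the right trade for most scripts.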


I agree with all of what you said.

The footgun you described can still happen if there's a verification error somewhere in the middle. You could still conceivably craft exploits using only truncation of the plaintext, depending on the situation.

No one should "decrypt < file | sh" (or anything | sh without verifying), but they will. Doesn't matter if we have POSIX or non-POSIX shell flags that can fix it, the defaults are bad.

There's nothing tools like age can do about that, though.

Edit: I was thinking more along the lines of

    if decrypt < file | postprocess > tempfile
    then
        sh tempfile
    fi
where postprocess exits zero. This is where the default shell behavior fails. The "decrypt < file | sh" antipattern is something not even the shell can do anything about.


> No one should "decrypt < file | sh" (or anything | sh without verifying)

I was thinking of self-prepared scripts, tooling, or owner-controlled distribution. Decrypt + good signature is precisely what I want.

Anyway, as nmadden pointed out, age does not provide source authentication, duh. AFAIU that means all the streaming semantics and blockwise AEAD are practically useless, unless you are using the password encryption, which is helpfully blocked from automation.


> The "decrypt < file | sh" antipattern is something not even the shell can do anything about.

It could refuse to accept input from stdin if it's not a terminal.


> If it encounters midstream corruption it truncates the output, exits with a non-zero return and prints some error text std stderr:

Do you mean it releases output even if the encrypted file is corrupt or tampered with? Isn't this one of the issues in e-fail?


I think the problem with e-fail was that gpg would output data before verifying it. Age will only output chunks of data after they've been verified.


The fact that this is the point of streaming encryption does not preclude the usefulness of pointing it out explicitly. It eliminates a reasoning step by spelling it out, which is always a good thing for critical things, IMO.


Serious question: If you're not signing (age does not*), then what is the point of the AEAD STREAM scheme? By definition, nothing is authenticated, right?


Consider this attack.

You found a vulnerability in FooSmith and want to collect a bounty. You're keeping the vuln secret both for security reasons, and so no one else can jump your claim.

FooSmith has announced a bounty process where you can claim a bounty by sending an encrypted message with a novel vulnerability according to a specified process.

So you send a report using the mandatory bounty collection form, which starts off with a fixed position field "Bitcoin address to pay bounty to: <address goes here>".

I happen to know what address you're going to use since you posted it so everyone could see when you got paid. I happen to have write access to FooSmith's issue tracker. I xor youraddress xor myaddress into the stream at the right position, and tada thanks to the fragility of stream ciphers, esp unauthenticated ones: it decrypts to a different message that asks for the payout to my address.

Adding a digital signature to the encrypted message wouldn't have magically made it secure: I would just rip that one off and replace it with my own-- FooSmith can't authenticate a signature here; the authentication is "common membership inside an encrypted message", and without authentication that can't work securely.

There are other attacks when the encryption lacks auth. Imagine you run a network service that accepts encrypted messages, decrypts them, and then reports back various distinct result messages based on what the input decrypted to.

I have an encrypted message for your service authored by someone else and I'd like to learn about its content. Without auth I could start sending it to you over and over again, flipping bits in it to learn about the content. In some cases, when the planets align just right, this kind of bug lets you use the service as a decryption oracle-- you can get the entire encrypted message!

(Toy example: if the service reports the input in an error message, simply corrupting the first bit might instantly get you the content. But it can be much more complex and subtle than that.)

This isn't to say that you couldn't build a security protocol that didn't use authed encryption... you can, but without auth the encryption doesn't form a nice abstracted layer and much more of the application has to be analyzed from the perspective of cryptographic attacks. History has shown people fail to do this well, so authed encryption should almost always be used unless there is a really good reason why it can't be.
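The bit-flip in the bounty example works because an unauthenticated stream cipher just XORs a keystream with the plaintext, so an attacker who knows the plaintext at a known offset can rewrite it without the key. A toy sketch (SHA-256 in counter mode standing in for a real stream cipher like ChaCha20; all names and addresses made up):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode, standing in for ChaCha20.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"s" * 32
msg = b"Pay bounty to: 1VictimAddrXXXXX"
ct = xor(msg, keystream(key, len(msg)))

# The attacker knows the victim's address and its offset -- no key needed.
victim, mine = b"1VictimAddrXXXXX", b"1AttackerAddrXXX"
forged = ct[:15] + xor(ct[15:], xor(victim, mine))

print(xor(forged, keystream(key, len(msg))))
# b'Pay bounty to: 1AttackerAddrXXX'
```

A per-chunk authentication tag makes this forgery fail at decrypt time, which is exactly what the AEAD in age buys even without sender authentication.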


'nullc and I are talking about the same thing.


> NSA recommended people stop using ECC <384 bits

Since when does anyone who doesn't have to care about FIPS compliance care about the NSA's opinion?


> NSA recommended people stop using ECC <384 bits (https://apps.nsa.gov/iaarchive/programs/iad-initiatives/cnsa...).

Yet they are still fine with AES-128, even though it is objectively a weaker link in the chain. See https://blog.cr.yp.to/20151120-batchattacks.html


If you follow the link in my message you will see a table with "Use 256 bit keys" for AES.


Some questions about the spec:

1. How does age disambiguate between filenames and other key formats for the -r argument? (Those formats are also valid filenames)

2. Does the header use normal Base64 (i.e. +/) or url-safe Base64 (i.e. -_)? The specification sounds like normal Base64, but some lines of the example contain -_ while others contain +.

3. What characters are allowed in the header? ASCII only? (the current key-formats are ASCII only, but an implementation is supposed to skip unknown formats)

4. Are any characters forbidden in recipient types, arguments and additional lines?

5. Which strings at the beginning of a header line have special meaning and thus are illegal for additional lines? Only `-> ` and `--- `? I assume the space is mandatory in those strings despite the spec not mentioning that for `->`?

6. CRLF normalization of the header is only mentioned in the section about ascii-armored files. I assume it also applies to non ASCII armored files?

7. Is keeping the public key secret to achieve symmetric authenticated encryption an officially supported/recommended use-case?

(If the public key is public, the MACs block decryption oracles. However they don't provide any authentication, because the message isn't bound to any sender and thus an attacker can just encrypt their own message to your public key. If the receiver's public key is secret, this isn't possible and thus the current implementation provides symmetric sender authentication)

8. How does the command line tool signal failure/truncation/corruption?


Answering what I can. Where something is an implementation detail, I'm referring to rage (which I'm obviously more familiar with).

1. rage tests arguments for validity as filepaths, and uses the file preferentially over treating the argument itself as a recipient format.

2. The header uses normal Base64. This was changed recently, and the examples likely need updating.

3. rage currently rejects unknown formats; I haven't implemented this part of the spec yet.

4. Based on the current contents of the age specification, it looks like limiting to standard Base64 characters is consistent.

5. Additional lines all need to be standard Base64 characters (i.e. consistent with the format of current recipient lines) if implementations are going to be able to skip unknown formats.

(Recipient lines are currently under-specified in the spec. I opened https://github.com/FiloSottile/age/issues/9 a while back for addressing this.)

6. The normalization notes are an artifact of an earlier ASCII armoring format. Now that the armor is (a strict subset of) PEM, there is no need for CRLF normalization, as the age format solely uses LF, and PEM (which can tolerate either) is only a wrapper around the age format and thus does not affect the header.

8. rage signals this via an I/O error in the library that will bubble up through std::io::copy; this amounts to truncation on a chunk boundary and a non-zero exit value.


Is there a detailed description of the security goals and crypto rationale anywhere?

For example, it seems that if you use scrypt then you get fully authenticated encryption: the message must have come from somebody who knows the password (either a trusted user or you chose a weak password). But if you use X25519 then the scheme used is ECIES, so no sender authentication, only IND-CCA security.

The format document says that if you want "signing" then use minisign/signify, but I suspect most people want origin authentication. We know that it is actually quite hard to obtain public key authenticated encryption [1] from generic composition of signatures and encryption, with many subtle details. It would be better if age supported this directly for X25519 as it does for scrypt. Unfortunately, you can’t simply use a static key pair to achieve this (as in nacl’s box) as age uses a zero nonce to encrypt the file key with chacha20-poly1305 so reusing a static key will result in nonce reuse. (This seems a bit fragile).

[1]: https://eprint.iacr.org/2001/079


An example of why I think this is important. Adam Langley’s post that is linked from the spec [1] talks about cases where people want to do things like:

    decrypt file | tar xz
Elsewhere in these comments somebody also mentioned the case of

    decrypt file | sh
Presumably the whole point of implementing the STREAM online AEAD mode is to support these kinds of cases; only releasing chunks of plaintext after verification.

But these use-cases are only secure in age when using the scrypt decryption option or if you have first verified a signature over the entire age-encrypted archive (killing the streaming use-case). The reason is that the X25519 age variant provides no sender authentication at all, and so an attacker doesn’t need to tamper with the archive: they can just generate their own ephemeral key pair and replace the entire thing with data of their choosing. Age has no way of detecting such an attack.

You absolutely need origin/sender authentication built directly into the tool to handle these cases securely.

[1]: https://www.imperialviolet.org/2014/06/27/streamingencryptio...


I wrote up some more detailed notes here: https://neilmadden.blog/2019/12/30/a-few-comments-on-age/


"it looks like we'll be ok" remains their status on the problem of whether it's fine to just take SSH keys and use them for something quite different.

That's just not good enough. It was fine in early drafts because there was hope they'd remember that "Solve all of the world's problems" was not their goal, and so SSH keys might be irrelevant in later revisions anyway. It's not fine in something intended to actually ship.

Either get somebody to put lots of work in to verify that yes, it's definitely safe to do this as SSH stands today, and contact SecSH WG or Sec Dispatch or whoever to make sure they know you're doing this now - or, as seems much more likely, rip out all the SSH key code and highlight that line about how you don't want to do key distribution in age because it's hard.

PGP is full of things its creators thought might be safe that you now have to tell people not to do because it turns out they're unsafe. This tool should not recapitulate their mistake.


I am fairly confident the SSH key reuse is fine, or I wouldn't have shipped it. But yes, it would be a misrepresentation to say there are formal proofs of it. There's no one I can think of that we can pay in short order to make robust ones. FWIW, we don't really have proofs for ECDSA either, and it's been almost 30 years. (Anyway, the core age flow with native keys is unaffected.)


Rust implementation: https://github.com/str4d/rage


Ooh, the included "rage-mount" utility allows mounting an age-encrypted zip or tar file as a directory. (It can be installed with `cargo install --features mount age`.)


This looks really good. I've used ccrypt (#1) for years as a simple Unix-y encryption tool to avoid the complexity of GPG (though it's symmetric encryption only, so you need a secure way to exchange keys).

I just added a pull request (#2) to allow the recipients flag to also be specified as an https:// or file:// URL - this is mostly useful for using the GitHub <user>.keys endpoint to grab a user's keys, e.g.

  ./age -a -r https://github.com/<user>.keys < secret
will encrypt using <user>'s GitHub SSH public keys.

#1 http://ccrypt.sourceforge.net

#2 https://github.com/FiloSottile/age/pull/43


Not that it matters, but age hasn't hit 1.0 yet. (Close, though!)

With that in mind, it's still really exciting. I can't wait until I never have to use GPG ever, ever again.


Can I ask a question that I've never been able to answer by Googling? Kleopatra is the tool of choice for GUI-based GPG/PGP stuff on Windows, right? So why is it that for literally any software I download, it cannot locate the keys on any online database, including MIT's and whatever else the top keyservers are?

If those keyservers are not being regularly updated by trusted data vendors, how am I supposed to trust GPG-signed stuff? It isn't like SHA, where I just need to compare two hashes.

I'd shift to command line tools if I knew that the protocol was being widely used effectively.


Not everyone uploads their PGP keys to keyservers. Also, keyservers don't verify the ownership of the keys uploaded to them. You're supposed to import the signer's public key first.


Yes, that's the other choice, right? But then if I'm going to a compromised website with no idea that a MITM attack is taking place, I'd download the wrong public key, wouldn't I? In that scenario, why is it trusted more than something much simpler like SHA? Is it just because it doesn't need a hash calculation?

So the larger question is: how do I verify ownership of a moderately distributed file? Not something with tens of millions of users hosting mirrors so that everything is cross-checkable, but not 10-downloads-a-month software either.


A nice example is the number and names of keys for <president@whitehouse.gov>.


What problem is this solving that isn't already done by other tools?


This is like the engine underneath PGP, but modernized and with the misfeatures stripped out. You'd use it, instead of PGP (which is bad), for encrypting files, and as a building block for the operational tools that really are just straightforwardly encrypting files (i.e. not messaging, which has its own distinct needs and its own purpose-built cryptosystems).

More on this: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html


There seem to be quite a few slight contradictions (or perhaps just varying views expressed) in both the post and the way it's referenced here.

I assume that OP's question implied that there generally are downsides to using separate tools (such as fragmentation, and then mostly UX ones: obtaining/installing them on all the machines that need them, managing keys differently, learning/using additional software, etc.) when a task can be achieved with commonly available ones. But then the article criticizes GnuPG's UX, and suggests using a bunch of different tools.

Then the article says "let's call both GnuPG and OpenPGP `PGP`", and proceeds to criticize "PGP", standing for both GnuPG and OpenPGP.

Then it criticizes OpenPGP metadata leaks (possible attachment of a key to an identity), but suggests to use services such as Signal and WhatsApp (certain attachment of a key to an identity via a phone number, AFAIK). Or the ones using similar algorithms (I've only tried OMEMO out of those myself, which led to messages not even being shown in IM clients, apparently due to implementation inconsistencies).

Then it goes on to suggest not encrypting email. I guess the implication is that one shouldn't use email for secret data at all, but the much more common practice is to actually use it for secret (if not "life and death") data, sending plaintext passwords and such; using PGP would still be a step forward. Perhaps it's the contrast between such criticism (both here and of various other technologies) and common practice that makes me rather skeptical of the former: we can do better than X, but we're not even doing X.

WOT/PKI criticism is present there too, but the suggested software either doesn't do/need it at all, or relies on a safe channel and direct verification (which is usable with OpenPGP as well).

I'm not advocating the use of OpenPGP for everything, but I find those arguments rather strange.


I'm sure there was a question in there somewhere, but I'm not seeing it. I'm the author of the article that you're responding to. I'm happy to answer questions that I can parse as such.

You can do a lot better than OMEMO. Just use a serious secure messaging application: Signal or Wire are both fine options. Virtually every secure messaging application, including OMEMO, is better than attempting to make email cryptographically secure.


I didn't have a question, though perhaps not quite seeing where that advice is coming from (what are the threat model and underlying assumptions?) can be stated as a question, as can the definition of "better" here. For instance, phone number exposure and centralized systems (in the case of Signal) or unreliable message delivery (in the case of OMEMO implementations) seem rather bad to me, while properties such as deniable authentication seem useful only in rather specific and rare cases (though they still wouldn't hurt if they were better supported). It's also challenging to use OpenPGP, even with widespread email usage and the standards having been around for a while, since people rarely care about encryption, and the most common case (AFAICT) is to send private/secret data in plaintext emails. Given that, it seems counterproductive to advise against using it while recommending systems with more obstacles instead. Do you view some of the properties they add as particularly useful in common cases, and/or as worthy trade-offs?


I've read that article before and really enjoyed it. I don't know if it answers the question, though. So, if I want to message someone I use Signal, if I want to send someone a file I use Wormhole, if I want to sign something I use Minisign, if I want backups I use tarsnap. Is this for "if someone compromises this machine there's still something else they need to do to access this file?"


Your comment on purpose-built systems makes me want to ask a question I've been wondering about for a while:

What would be the best way to encrypt something with a lot of files in it (like, say, a home directory), assuming you wanted to access it across the network on multiple devices?

Sorry if this question's annoying, it seems like something you might get a lot.


Encrypted LVM, or any other block-device-level method.


I admit I grievously misworded my query, but (something like) Magic Wormhole seems to be the answer to the question I was meaning to ask.


In practice, "the best" and most secure way of encrypting a folder is zip-compressing it several times with passwords.


Wait what, excuse me?

I don't mean to insult you or anything, but how did you come up with that idea?


OK, the linked article talks about a downgrade attack and then uses that as an excuse to talk about a whole lot of OpenPGP stuff that no one actually uses anymore. But the article entirely fails to show how a downgrade attack is possible. I mostly just skimmed the article after that but did not see any real attacks even against the old stuff. So not really a strong argument against the OpenPGP standard.


From the linked blog:

> A Swiss Army knife does a bunch of things, all of them poorly.

Counterexample: the Phillips head screwdriver in my Swiss Army knife is actually the best Phillips head I've ever found. It can turn a wider range of screw head diameters and depths without slipping than any other screwdriver I've used.

(Does anyone else have way more screwdrivers around than they can explain? I cannot think of any reason I would own more than two or three full-sized screwdrivers, and one set of small jewelers' screwdrivers... but I've got more than a dozen full-sized ones and a couple of sets of jewelers' screwdrivers. I cannot remember buying, inheriting, finding, stealing, borrowing and not returning, or being gifted any of them--but there they are. Glitch in the matrix?)


One thing about the Phillips screwdriver: perhaps it's the Pozidriv [0] type, which is compatible with Phillips but different/better.

"Phillips" was originally designed to slip to prevent over-tighten. One more PH. screwdrivers come in various sizes (read the listed article for more). Using the correct one works significantly better for: flat/ph/pz, etc. For stuff like torx is not even possible to use incorrect screwdriver. Last screwdriver quality greatly varies, with some brands being exceptionally expensive or even pride material to own.

[0]: https://en.wikipedia.org/wiki/List_of_screw_drives#Pozidriv


I went to replace the memory in a 2018 Mac Mini yesterday. I have small Torx screwdrivers from doing this on older Mac Minis and MacBooks. I have bigger security Torx bits (the screws have a pin in the center, so ordinary Torx bits won't work) that I bought when replacing the fuse in a microwave.

What I don't have are small security Torx bits or screwdrivers. Which is what you need for the 2018 Mac Mini.

I had bought the small Torx screwdrivers as part of a kit with small Phillips, pentalobe, etc. Major use case was replacing broken screens on Chromebooks when the kids were littler. Secondary use was replacing hard drives in Mac Minis (2011, 2012) with small SSD/1 TB combos.


I buy most of my tools at jumble sales; the older tools are the best and, with a bit of restoration work, better than any new screwdriver. I always carry a pocket knife and a knife sharpener like the one here https://www.gearassistant.com/best-pocket-knife-sharpener/ but rarely use the screwdriver on it.


The spec document has some more information on the rationale and goals that might be of interest.

https://age-encryption.org/v1


This reference from that spec is especially useful as an example of the kind of nuts-and-bolts modernization that makes cryptography engineers tear their hair out about PGP:

https://www.imperialviolet.org/2014/06/27/streamingencryptio...


I found their “Out of scope” list very interesting. In particular, they list signing and “anything to do with emails” as non-goals. From my fairly limited understanding of cryptography, isn’t signing almost identical to encrypting (at least with RSA)? I don’t understand why this would not be supported in a self-styled PGP replacement.

I also don’t understand the “anything to do with email” line. Sending my public key to a recipient on an out-of-band channel and then sending an encrypted email should be completely agnostic to the underlying encryption tools, no?

I don’t mean to sound critical - I’m very intrigued by this project and would love to have a better replacement for gpg!


Age is trying to avoid being an infinitely-flexible swiss army knife like GPG. It's been argued that GPG's do-everything design is the root of many problems both technically and in usability: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html.

Minisign is a good similarly-small tool for doing signatures + verification of signatures and nothing else. The cryptography is similar, but the use-cases are often different. The tools both follow the UNIX philosophy of trying to do one thing well. Blenders and belt sanders both operate with motors, but no one tries to combine them into one tool.

>I also don’t understand the “anything to do with email” line. Sending my public key to a recipient on an out-of-band channel and then sending an encrypted email should be completely agnostic to the underlying encryption tools, no?

Yes, you can do that.

Age doesn't want to involve web-of-trust, keyservers, key rotation, forward secrecy, post-compromise security, message repudiation, signing, email standards, and email client integration. All of these would be necessary for a good end-to-end encrypted messaging system. Try reading about the Signal protocol to see everything it does, and then try to figure out how to stuff it all into existing email systems. It's hard enough to get any of that stuff right even without the restriction of working inside email.


Please read this to stop thinking signing and encryption are related: https://security.stackexchange.com/a/87373/70830


They are related in the sense that encryption and signing algorithms use the same cryptographic operations. But the algorithms themselves are different, so having knowledge or faith in a certain encryption algorithm does not mean you have either in a related signing algorithm. They are separate domains.


> They are related in the sense that encryption and signing algorithms use the same cryptographic operations.

What is being encrypted in ECDSA? Or in Ed25519? You are wrong. Read the link.


tinco did not claim that anything is being encrypted in ECDSA nor in Ed25519.


I reacted to "isn’t signing almost identical to encrypting" and "encryption and signing algorithms use the same cryptographic operations".


I have a fun story to tell about how AGL and I couldn't use GPG to send each other emails about a vulnerability. For things like integrating with Salt, this is way easier than dealing with the busted web of trust.


> What problem is this solving that isn't already done by other tools?

It solves the problem of having to use the bloated monstrosity that gpg has become.


It’s not pgp


Age looks somewhat promising, but I am still looking for a reasonable alternative to TrueCrypt. Encryption should be as easy as possible. The process of gathering your files (plural!), putting them into a folder, zipping or tarring that folder, encrypting it, and then deleting the remaining files afterwards is anything but easy. Adding new files is even more horrible. TrueCrypt was so easy: just select the encrypted file, enter your password, and voila, you've got a volume mounted where you can easily add or remove many files. I know that VeraCrypt exists, but it does not feel like a solution for the next decade(s).

It's super weird. There is this use case of encrypting/decrypting a single file, but mass storage of files in a secure way and without a proprietary protocol seems impossible.


There are nice solutions on Linux: luks (encrypting partitions) and cryfs (encrypting directories)


There is also gocryptfs. It is written by some of the same people who did encfs and attempts to fix all the security issues discovered during its years of use.

https://github.com/rfjakob/gocryptfs


> and cryfs (encrypting directories)

Pretty sure that this leaks a lot of metadata.


No good without plausible deniability. It was that feature that got TrueCrypt in trouble.


You mean like VeraCrypt?


I've been using VeraCrypt for a few years now, and have nothing bad to say about the experience - it's really easy to setup and operate.


rclone does that, you can mount an encrypted folder as a disc...


This is neat. A quick browse through the code suggests it uses DJB's ChaCha and Poly1305 underneath.

I've been waiting for a worthy replacement for "crypt" for a very long time, and gpg, while it can be coaxed into doing that with much effort, has simply become a bloated abomination at this point.

Hope this gets vetted by the crypto community and gains popularity.


Not that I disagree but just a quick note that Filippo is very much a part of the cryptographic community.


So the modern alternative would be this for file encryption, and signify for signing. What's the consensus on an alternative for GPG-style authentication?

And what are the expert opinions on themis: https://github.com/cossacklabs/themis ?


Themis looks like a more complicated take on libsodium, which is already the de facto standard modern crypto library. I'd use that instead.


We need a protocol/scheme that other things can adopt much more than we need a tool. There will always be a reason why someone can't use a particular tool, but with an encoding/scheme/protocol you can push for different things to adopt it.

To give an example, I was in a work situation more than once where an external party wanted to transfer files to or from our company and I was supposed to help find a standard tool/method. The only (and I mean only) way right now is PGP due to its ubiquity, with S/MIME on email being second. We do need great tools like this, but we need them such that if I can't use a given tool due to license, policy, etc. issues, I can use a separate compatible tool.

So, my only suggestion to the author is to please make a fixed and versioned standard out of the scheme.


A protocol for what? To send messages securely? We have that. To do deduplicated mass backup? That exists, too. To securely transfer files? We have that as well. Each of these problems makes their own demands on cryptography and deserves their own purpose-built cryptosystem, which is why Signal looks like Signal, and not a naive message bus over which we push PGP-encrypted records.

The point of `age` is that when you subtract out all these use cases from PGP and leave just the file encryption problem, PGP still sucks, and sucks way out of proportion to how complicated file encryption is.

So instead of bringing all of PGP's bloat, 1990s cryptography, and misfeatures to bear on that simple problem, we just get a simple, modern tool optimized for that one problem.


> The point of `age` is that when you subtract out all these use cases from PGP and leave just the file encryption problem, PGP still sucks, and sucks way out of proportion to how complicated file encryption is.

For clarity: Is this an endorsement of `age`?


`age` is awesome. We wrote something almost identical to `age` internally at Latacora that has some features `age` doesn't have (encrypting optionally to KMS keys, and managing encrypted DMGs), and I'm going to kill that tool off and add those features to a local fork of `age` instead.


You'll be happy to know that the spec [0] existed even before the tool, and that there are already two implementations developed from the spec [1]!

[0] https://age-encryption.org/v1

[1] https://github.com/str4d/rage


Thank you, that's all my original comment was about. If the spec exists, I can for example write a client that works with it under a different license/language.


I once made something similar after reading a blog post here on HN [1] just to see how easy it is to make something like this. Mine [2] uses passphrases to generate keys with the Argon2 algorithm and then uses NaCl's secretbox for encryption. I also made a version for use on streams. It's not up to snuff for industrial use, but it's really easy to use if you just want to encrypt some files with a password and also very simple if you want to modify it for your own purposes.

[1] https://blog.gtank.cc/modern-alternatives-to-pgp/

[2] https://stutonk.github.io/crypt.html


A tool/protocol is only as secure as the people using it.

I found when working with non-techies that 7-Zip is an acceptable encryption tool. It uses proper encryption, it's open source, it's available on all platforms, and it comes with both a GUI and a CLI.


That’s no good if you need public key encryption


Asymmetric encryption is hard for non-techies.


> Out of scope:

> The web of trust, or key distribution really

Is there anything in the tptacek suite of replacement tools for this? Like Keybase but fully open source and/or decentralized?


For encrypting text files, I just use vim's `:X` command and enter a pass phrase. Simple, easy, portable, works everywhere. I have configured my .vimrc to `set cryptmethod=blowfish2` and disable backup/swap files for encrypted files. Are there any issues with this? Is there any other option that will work on virtually all UNIX devices without installing anything additional?


It's so weird hearing what people are doing instead of PGP, and how bad it is. I had no idea vim even had this feature, but from what I've discovered in about 3 minutes of Googling, vim's "blowfish2" is the 64-bit-block Blowfish cipher in unauthenticated CFB mode. Just awful: Blowfish is weak, and attackers can manipulate the ciphertext of your files. This is why you want `age`.
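To make the "attackers can manipulate the ciphertext" point concrete, here's a toy sketch of the bit-flipping attack that any unauthenticated XOR-style encryption permits. This is deliberately not Blowfish or vim's actual code, just a hash-based keystream; CFB mode has the same XOR malleability on the targeted block (at the cost of garbling the following one).

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 of key || counter. For demonstration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret"
pt = b"pay alice $100"
ct = xor(pt, keystream(key, len(pt)))  # "encrypt"

# The attacker never learns the key; they only know the plaintext format.
# XORing in the difference between two strings rewrites the recipient.
delta = xor(b"alice", b"mallo")
forged = ct[:4] + xor(ct[4:9], delta) + ct[9:]

print(xor(forged, keystream(key, len(forged))))  # prints b'pay mallo $100'
```

With an authenticated mode (as in age), the tag check would reject the forged ciphertext instead of silently decrypting attacker-chosen content.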


Trying to unfuck vim's encryption spawned a very interesting GitHub thread, if anyone is interested: https://github.com/vim/vim/issues/638



Damn, the Vim developers seem to be a particularly unpleasant group of people.


Welp, I'm switching to neovim.


Just… wow.


I learned about this vi feature the hard way—by accidentally encrypting something important with no idea about what I had typed in as the key.

Luckily for me, vi defaulted to Enigma encryption back then…


I normally use scrypt (with a passphrase) for file encryption for personal backups. age isn't competing with this use case, I take it?


I'm not a big fan of tools that take encrypted data and decrypt it by creating a decrypted file on disk. They give you a false sense of security. Files, even if they get deleted, remain on hard disks. On SSDs they are even harder to remove, as there is a complicated layer of indirection. Even if you shred the file before deletion, it's possible that it will keep being stored by the SSD, maybe even permanently if a block containing the decrypted content is decommissioned.

The classical gpg based tools have this very same problem. The classical response is to suggest ramdisk usage, ideally for the entire OS (like a live system basically) to avoid getting artifacts onto the disk like clipboard history, cached thumbnails, or log files. pass for example uses such ramdisks. I disagree that this is a good solution though. Of course it is more thorough, but it requires additional intervention/setup, and not everyone has the needed expertise. Instead, I think the encryption tool itself should take care to only store the decrypted content in non-paged RAM, and give users read/write access through a GUI or a TUI. It should be a ready downloadable solution, similar to the TOR browser bundle. The TOR browser is also trying to not put anything onto the hard disk.


This tool is heavily setup to work with streams.

However, if your disk is exposed -- lots of other things, including shell history, swap, etc. may give you away.

The problem with stuff like "gives users read/write access" is that it presumes a narrow use case. What if you're encrypting digital audio? Source code? etc.

Should it also turn on your mic and try to determine if you're in a room alone? :P Demand you use an anti-tempest font?

There is only so much a tool can do. It's important that the tool does what it can within the context that it'll be used, but beyond that the best it can do is be clear about its limitations.


> However, if your disk is exposed -- lots of other things, including shell history, swap, etc. may give you away.

Good that you point out shell history. You probably mean stuff like secrets being passed to age via CLI params? That's quite dangerous even if you put a space before the command, which excludes it from your shell history: any user on your system has read access to the CLI arguments of every other process on the system. I've filed an issue upstream about this: https://github.com/FiloSottile/age/issues/37

I've mentioned swap in my comment. It's a problem indeed. On Linux you can prevent memory regions from being swapped out via mlock, but only up to a certain limit if you are unprivileged (the limit on my machine seems to be 64 MB). Windows seems to have a way as well. It's solvable in general, and RAM is cheap; it's the OSs that have to catch up. Even with the looming swap danger, your data is still safer in RAM, as it doesn't necessarily get swapped, while if it's on your hdd/ssd it is almost certain to actually land there (instead of just living in the RAM cache).

> The problem with stuff like "gives users read/write access" -- is that it presumes a narrow use case. What if you're encrypting digital audio? Source code? etc.

That's a good point. Given the issues you raised above (swap, shell history, etc. leaking data to disk), it would be best if the file were handled by specialized tools that are vetted not to leak any data onto the disk. You could think of a model where age is a library and the tools are manually vetted with that in mind. Or you could think of a model where age is embedded into a runtime and the tools are sandboxed wasm modules without access to anything but RAM. Admittedly, this is a huge project, and one shouldn't expect age to be such a runtime.

A good stopgap would be age enforcing best practices by checking whether the destination of the decrypted content is a ramdisk or not. I've filed an issue about this: https://github.com/FiloSottile/age/issues/36
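A minimal sketch of what such a destination check might look like on Linux (this is my own illustration, not code from the filed issue): resolve the path, find its longest matching mount point in /proc/mounts, and look at the filesystem type.

```python
import os

RAM_FILESYSTEMS = {"tmpfs", "ramfs"}

def on_ramdisk(path: str) -> bool:
    """Best-effort, Linux-only check: is `path` backed by a RAM filesystem?"""
    path = os.path.realpath(path)
    best_mount, best_fstype = "", None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _device, mnt, fstype, *_rest = line.split()
            # Keep the longest mount point that is a prefix of `path`.
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best_mount):
                    best_mount, best_fstype = mnt, fstype
    return best_fstype in RAM_FILESYSTEMS
```

On most Linux systems `on_ramdisk("/dev/shm")` is True while, say, `/proc` is not; a tool could warn (or refuse) when the output path fails this check. (This ignores corner cases like mount points containing escaped spaces.)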


Thanks for opening reports! I'll get to them soon.

Note that there are no secrets passed on the age CLI, for the reasons you mention. Only public keys, flags and file paths.

Being a file encryption tool, I think decrypting files to disk is core functionality with legitimate use cases, like backups and encrypted cloud storage.


What’s the point of decrypting a binary (non-text) file and not putting it on disk? What if I have to open it with photoshop?


The functionality should either be built into the OS or into Photoshop. Photoshop should support decryption of the file on the fly. It would best be paired with a "private" mode where it stores nothing about your activities (most programs store some artifacts of your behaviour, e.g. VLC stores your last opened files, and Kate stores which line you last edited, or something like that).


> be built into the OS, or into Photoshop

You're already setting yourself up for failure here; there's no way every single tool you use will integrate encryption.

I agree that the OS should do better, but unfortunately that is not the world we live in at the moment so user tools are what is needed.


You can put the decrypted file on a RAM file system, if the size constraints are satisfied.


Spot on. We need better primitives to encrypt and decrypt stuff on the fly while accessing the disk.

BTW it's written Tor, not TOR


So.. this is using chunked AEAD, without source authentication/signing?

What's the actual use case, and why is it any better than plain stream encryption? If you wish to stream authenticated decrypted contents, it would mean 2 layers of chunking.


Is age designed to encrypt large binary files? It seems to increase the file size by about 2 KB.


Yes. age encrypts large files in 64 kiB chunks with 16 bytes of overhead per chunk, which strikes a balance between file size overhead and performance (particularly when seeking).
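As a back-of-the-envelope sketch (my own arithmetic based on the figures above, ignoring the header and the payload nonce):

```python
import math

CHUNK_SIZE = 64 * 1024  # 64 KiB of plaintext per chunk
TAG_SIZE = 16           # per-chunk authentication tag

def chunk_overhead(plaintext_len: int) -> int:
    """Bytes of per-chunk overhead for a payload of `plaintext_len` bytes."""
    # An empty file is still one (empty) chunk.
    chunks = max(1, math.ceil(plaintext_len / CHUNK_SIZE))
    return chunks * TAG_SIZE

# A 1 GiB file is 16384 chunks, i.e. 256 KiB (~0.02%) of chunk overhead.
print(chunk_overhead(1024 ** 3))  # prints 262144
```

So the fixed ~2 KB cost is dominated by the header for small files, while per-chunk overhead stays negligible for large ones.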


That's interesting, and it seems to be an alternative to encrypted zip or 7z that could be used in production?


Which crypto primitives does this tool use? x25519+chacha?


+ poly1305 + scrypt
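For the scrypt piece, here's a rough sketch of passphrase-based key derivation using Python's stdlib (the cost parameters here are illustrative, not necessarily the ones age's spec mandates):

```python
import hashlib
import os

salt = os.urandom(16)  # random salt, stored alongside the ciphertext

# Derive a 256-bit file-encryption key from a passphrase.
key = hashlib.scrypt(
    b"correct horse battery staple",
    salt=salt,
    n=2**14, r=8, p=1,  # illustrative work factors, ~16 MiB of memory
    dklen=32,
)
assert len(key) == 32  # sized for use as a ChaCha20-Poly1305 key
```

The memory-hard work factors are what make offline guessing of the passphrase expensive; the derived key is then used with the same ChaCha20-Poly1305 payload encryption as the X25519 mode.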


Finally! :)


How can we know there's no backdoor in it?


I don’t think it’s a bad question and don’t see why this is being downvoted.

Security can only scale via one’s network, and if you don’t have any it can be hard to figure out what’s secure and what’s not!

FWIW, a little googling and you can see that Filippo is pretty well known in the security/crypto community for positive contributions; the same goes for tqbf, who's all over this thread endorsing the tool.

I would also trust the thing without looking at it, but I might take a look at the code someday to see what’s going on :)


It seems you can read the source code?


It's OSS?


Yeah. So all you need is 10+ years of experience in crypto algorithms and weeks of close inspection of the code to verify it!


Sooo you need to trust someone that does have that experience to do the verification. What alternative are you suggesting? Is there some cool way to write your crypto so that a layman can successfully verify the integrity of a binary?


> What alternative are you suggesting?

One solution might be if some big corporation or even a government, or why not Bill Gates himself, offered a big ongoing bug-bounty for this Open Source Software.


meh.

https://stackoverflow.com/questions/16056135/how-to-use-open...

With that I'm guaranteed AES, a known-good encryption algorithm. I have no idea what these guys are doing without reading through their documentation. Hopefully they didn't roll their own.


That command line doesn't even produce authenticated ciphertext. I'm amazed that's the check-marked best answer on Stack Overflow.


I have seen many questions on security stack exchange and /r/crypto where the correct answer should have been "use age", but because it didn't exist the correct answer was something bad. The openssl CLI is not meant to be used in prod (both because it's not AEAD and because the man page tells you not to use it). gpg is bad. Rolling your own CLI tool using libsodium is not for everyone.
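To make the AEAD point concrete, here's a toy stdlib-only Python sketch (an illustration, not a real cipher and not age's actual construction): a stream-cipher-style XOR keystream is malleable on its own, while an encrypt-then-MAC tag catches tampering. Real tools use a proper AEAD such as ChaCha20-Poly1305, which is what age does.

```python
import hashlib, hmac, os

def keystream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream, for demonstration only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))
    # Encrypt-then-MAC. A real design would derive separate keys for
    # encryption and authentication (e.g. via HKDF).
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("tampered ciphertext")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

key = os.urandom(32)
blob = encrypt(key, b"pay $100")
# Without the MAC, flipping ciphertext bits silently flips plaintext bits;
# with it, the same tampering raises an error instead.
tampered = bytes([blob[0] ^ 0xFF]) + blob[1:]
```

That malleability is exactly the class of ciphertext attack the unauthenticated `openssl enc` invocation leaves you open to.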


https://twitter.com/pwnallthethings/status/12107355525357527...

I'd be nicer but for the "hopefully they didn't roll their own" at the end.


It's possible this is the wrong way to get AES on the command line; I haven't done it and have no need to right now. But that's missing my point entirely.


Your point is that you can do the wrong thing with OpenSSL so that means you don't need a tool that does the right thing?

That is not a solid point.


The point being?

You've linked to a command that's wrong, from a random internet "everybody gets to answer a question and everybody gets to vote for the best answer, no qualifications required" website, and wrote "with that I'm guaranteed AES, a known-good encryption algorithm" as if that means anything.


[flagged]


Authenticated encryption has nothing to do with identity; you've confused message authentication with signatures.


>Ok, for the thick-headed: my point is I'll use AES. Good luck with your toy project.

"AES" without specific qualifications and with instructions sourced from Stack Overflow is an ad-hoc toy project.

Age, on the other hand, is a well-specified, best-practices project approved by several domain experts.

In the end you can do whatever, but don't use some custom solution ("AES" or not) for anything that human lives or money might depend on.

>Also authenticated encryption is not something I want all the time. I don't always want an identity tied to cipher text, even if the identity is anonymous.

Identity is not what "authenticated encryption" is about. It's about protecting against certain classes of ciphertext attacks, which people (bad people, one would assume) can use to get information about your encryption.


Enjoy your EFAIL v2.0!



What you're linking has now literally been edited with an update to suggest the tool age.

The idea that "AES is enough" is like saying you don't need better winter clothes to go skiing because you have a good helmet. There's more you need to get right than the cipher: a secure block cipher mode, key management, IV generation, and so on are all mandatory!



