Hacker News
Librandombytes – a public domain library for generating randomness (cr.yp.to)
82 points by tkhattra on Jan 26, 2023 | 77 comments



How does this compare to what you get from one of the csprngs in rust's rand crate? I'm not sure when I'd use this.

https://rust-random.github.io/book/guide-rngs.html#cryptogra...


There shouldn’t be much of a difference, although the rust crate is more similar to the OpenSSL part. Just make sure to properly seed your CSPRNG, e.g. by using Rust’s ThreadRng[1]. In basically all cases there is no reason to use anything from the rand crate except ThreadRng.

1: https://rust-random.github.io/rand/rand/rngs/struct.ThreadRn...


If you have to seed it, you're doing something wrong. You want to be using the system random number generator, the one the kernel provides, in preference to any userland RNG.


ThreadRng is what the GP recommended. From the linked page:

> ThreadRng is automatically seeded from OsRng with periodic reseeding (every 64 kiB, as well as “soon” after a fork on Unix — see ReseedingRng documentation for details).

Is the system's random number generator actually better than this? It looks like the rand developers know what they're doing, and using a library like this is attractive because I don't have to figure out the "right" random number API on different operating systems. (arc4random()? Is that only a macos thing? srandom()? random()? What is it on windows? Should I be reading from /dev/srandom or something? Etc.)
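For what it's worth, higher-level runtimes already answer the "which API on which OS" question for you. A sketch in Python, whose os.urandom() is documented to call the platform's preferred source:

```python
import os
import secrets

# os.urandom() wraps the platform CSPRNG: getrandom(2) on modern Linux,
# getentropy() on OpenBSD/macOS, BCryptGenRandom (or its predecessor)
# on Windows. No /dev/urandom vs. arc4random() decision needed.
key = os.urandom(32)

# The secrets module is a thin convenience layer over the same OS source.
token = secrets.token_bytes(32)

assert len(key) == 32 and len(token) == 32
```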


Are there any methods of generating randomness on common platforms — Linux (raw or VM), Windows, MacOS — that are suitable for use as a cryptographic one-time pad?

The definition of this library function seems to suggest that it’s suitable:

> librandombytes aims for the following stringent randomness goal: no feasible computation will ever be able to tell the difference between the output bytes and true randomness (independent uniformly distributed random bytes).

However my understanding is that PRNGs are not a suitable source of randomness for one time pads; that this would reduce OTP encryption to being something like an ad hoc stream cipher.

So some implementations that might look random wouldn’t actually provide a suitable bitstream for this purpose: the bits in the output would be correlated, if in a complex, cryptographically obscure way. (But bits in a one-time pad should all be entirely random and uncorrelated.)

Is that accurate?

Do modern PCs have an efficient way to produce meaningful amounts of true stochastic random data suitable for use with OTP encryption (such as the RDRAND instruction)? What are some good abstractions for producing a stream of random data suitable for use with OTP cryptography?

Edit: this is a question for the sake of curiosity. I realize that practical systems have many threat vectors and that OTP is not a panacea, or even necessarily an improvement.


"OTP cryptography" is for the most part not a thing. If you were running a spy ring and literally giving each of your agents a paper pad with numbers on them, you could print them from `getrandom` output; the `getrandom` bytes wouldn't be how that system was attacked.


This is an intellectual question for curiosity’s sake.

I realize that an OTP encryption system will not be a practical improvement for communication compared to e.g. the Signal protocol, which is easier to use and provides a number of other advantageous properties; or something like TLS. (If you could securely exchange one time pad material, then you could also more easily exchange asymmetric keys)

I realize that actual systems have a large number of threat vectors, and that the encryption protocol itself is unlikely to be the primary risk.

Nevertheless, I’m curious: if one decides to implement digital encryption using one time pads, what is the most practical source of suitable randomness?

And have communication protocols been designed for OTP, or would your best bet be to layer it on top of something like Signal or TLS? It seems that you probably need to exchange at least some metadata along with each message describing, e.g., the message size and starting offset into the pad material - metadata that you’d want to protect with encryption, and I don’t see a very straightforward way to protect it using OTP.

On reflection, this probably only has relevance for government or military communications that might need to remain secure for longer than 50 years.

Perhaps the use-case is something like an aircraft carrier, submarine, or other large, critical vehicle or facility, that receives OTP material by delivery of a hard drive protected by considerable physical security.

Critical communication from these vehicles/infrastructure would remain protected against future cryptographic attacks - e.g., if an algorithmic vulnerability is discovered, or there’s a breakthrough in quantum or traditional computing that renders today’s encryption vulnerable within the next few decades — the OTP-protected message content remains protected. (Even if intercepted, recorded, and attacked decades later)

Yes, I agree that the practical applications are limited… The questions are an exercise for curiosity’s sake: if you’re going to employ OTP encryption, then …


To achieve the "theoretical" unbreakability that keeps one time pads perennially in the discussion, you need a true random source of bits: a (sufficiently bias-free) measurable natural process to sample. That's what makes OTPs unbreakable: there's no structure anywhere to attack.

To achieve practical unbreakability, you just need your bits to be indistinguishable from random. You can take a relatively small number of unguessable bits and then run them through the Blake2-based LRNG algorithm to spool out a practically unlimited number of "random" bits. Theoretically you can attack Blake2 and the LRNG system; there could be some vulnerability there. In practice, this would be a deeply shitty setting to try to cryptanalyze anything, even if Blake2 was broken, which it isn't.

If you are going to go through the trouble of literally distributing physical pads to all the counterparties, though, you might as well just generate true random numbers.
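As a sketch of that "spool out" step, here is a toy counter-mode expander built on BLAKE2b. This is illustrative only, an assumed construction for the comment's argument, not the kernel's actual LRNG:

```python
import hashlib

def expand(seed: bytes, nbytes: int) -> bytes:
    """Toy counter-mode expander: hash (seed || counter) with BLAKE2b.
    Given an unguessable seed, distinguishing the output from random
    would require attacking BLAKE2b itself."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.blake2b(seed + counter.to_bytes(8, "little")).digest()
        counter += 1
    return bytes(out[:nbytes])

stream = expand(b"32 unguessable seed bytes.......", 1 << 16)
assert len(stream) == 1 << 16
```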


The problem with true random is that you might screw up your avalanche diode circuit (or whatever) in a way that makes it random, but not uniformly distributed. It might also pick up EM from somewhere and not "really" be random all the way. By the time you add all the whitening and that sort of thing to make those problems go away I'm not sure how the information theory proofs hold up. Probably what you would want to do is keep the physical random number generator in a fine-mesh Faraday cage in a controlled environment, and run statistical tests for a while before generating your pads. The rng-tools package on Linux has a test tool. Make sure you disconnect the test wires before generating the pads though!
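A sketch of the simplest such statistical check, a monobit frequency count (rng-tools' rngtest runs this and the other FIPS 140-2 tests). It only detects gross breakage; passing says nothing about adversarial resistance:

```python
import os

def monobit_fraction(data: bytes) -> float:
    """Fraction of 1 bits in the sample. A healthy source sits very
    close to 0.5; a stuck or heavily biased source does not."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))

assert abs(monobit_fraction(os.urandom(1 << 20)) - 0.5) < 0.01
assert monobit_fraction(b"\xff" * 1024) == 1.0  # a stuck-high source fails
```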


> By the time you add all the whitening and that sort of thing to make those problems go away I'm not sure how the information theory proofs hold up.

Why wouldn't the information theory proofs hold up?

Fun puzzle: Suppose I give you a coin you can flip with some bias (weight). Maybe 75% of coin flips are heads. Maybe it's 65%. You don't know.

You want to generate a sequence of bits with uniform randomness (exactly 50%). Without first sampling the coin & calculating the bias, how do you generate the uniform sequence of bits?

The answer to that puzzle would probably work fine for your theoretical one time pad.

Also, in practice I suspect most of the obvious ways your random noise circuit could be broken could be detected using statistical methods. You can't use math to prove a random number generator is truly random. But you can certainly detect a lot of common failure modes of "random" number sources.
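One classical answer to that puzzle is the von Neumann extractor; a sketch (assuming the flips are independent with a constant, unknown bias):

```python
import random

def von_neumann(flips):
    """Debias independent but biased coin flips: take them in pairs,
    output the first of an HT or TH pair, discard HH and TT. Both
    surviving outcomes occur with probability p*(1-p), so the output
    is exactly 50/50 without ever measuring p."""
    it = iter(flips)
    for a, b in zip(it, it):
        if a != b:
            yield a

random.seed(1234)
flips = [random.random() < 0.75 for _ in range(100_000)]  # ~75% heads
bits = list(von_neumann(flips))
assert 0.45 < sum(bits) / len(bits) < 0.55  # debiased, despite unknown p
```

The cost is throughput: with bias p, only 2*p*(1-p) of the input pairs survive.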


Rejection sampling works when your bias is constant, not when it varies depending on the environment.


I'd imagine just XORing a CSPRNG and a HWRNG would solve most of those issues. Even if the HWRNG had problems (say some interference made it generate much less entropy), an attacker can't derive the CSPRNG state (even if it was vulnerable) because there is no way to "un-XOR" the output.

Also, for an OTP a nonuniform distribution would "only" cause the numbers to have less entropy on average, and that could be corrected by just making them a bit longer.
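A sketch of that XOR-combining idea (the `hwrng` read below is a stand-in; Python has no portable HWRNG API, so a second CSPRNG call is substituted purely for illustration):

```python
import os
import secrets

def combined_random(n: int) -> bytes:
    """XOR an OS CSPRNG stream with a second, independent source.
    The result is at least as unpredictable as the stronger input:
    predicting the output requires predicting both streams."""
    csprng = os.urandom(n)
    hwrng = secrets.token_bytes(n)  # stand-in for a real HWRNG read
    return bytes(a ^ b for a, b in zip(csprng, hwrng))
```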


I don't think you can verify the safety of a one-time pad with statistical tests.


The problem with a true OTP solution is not generating the random bytes. It really isn't. There are many, many well documented ways to do it. Pretty much all modern phones and many (most?) computers from the last 5 to 10 years have hardware RNGs, and can generate true random values very fast compared to the old "sample some part of the computer" mechanisms, while also being much more robust.

The problem is a true OTP requires both sides to have access to the same stream of random bits, and they must have enough for all messages. You can't do that online, because transmitting the random key stream would require its own random key stream that was at least as long.

That’s why OTP are impractical.


getrandom(0) on the latest Linux kernel, assuming the computer is used for a while first (as there's no real way to figure out how much entropy is "enough"), is probably fine for actual one-time pads. I think it's very unlikely that any future attacks could pwn ChaCha20 (and Blake2s) to that extent. You would have to figure out where you were in the stream, and with what key (possibly multiple keys). That's pretty close to zero information in cryptanalysis terms. You would need to make sure the computer generating the pads, as well as the printers, etc., are well shielded and not emitting usable EM information, don't store the pads in any way, and are not compromised beforehand in some way, such as a supply chain attack.


If you trust ChaCha20 and Blake2s this way, you should just use them. Most of the point of a one-time pad is not having to trust algorithms like these.

(In reality, just forget that one-time pads exist).


Iirc NATO used a one time pad codebook for ship communication - but I suspect these days it would only be a fallback in case there's a need to use open radio or similar for signaling?


> However my understanding is that PRNGs are not a suitable source of randomness for one time pads

A PRNG (Pseudorandom Number Generator) can have varying levels of (1) Quality, and (2) Adversarial resistance (both are closely related of course). Quality generally can be estimated by quantities like period, correlations and some other measures; there are tools that can give useful measures of quality. Adversarial resistance needs serious cryptanalysis (not just passing randomness tests), preferably relying on well known constructions (like hash functions constructions). When a PRNG is made with (2) in mind, it's usually called a CSPRNG (Cryptographically Secure PRNG). DJB here is presenting a CSPRNG:

> This makes the randombytes() output suitable for use in applications ranging from simulations to cryptography

Daniel is a well known cryptographer and this seems to reuse linux kernel and OpenSSL primitives, so I assume it's fine here to use for any cryptographic applications.

Note that the one-time pad is a very inefficient form of cryptography. Generally in cryptography you don't want to roll your own algorithm except for study/research purposes (the famous adage "Don't roll your own crypto") -- just use something like OpenSSL or libsodium.


> this would reduce OTP encryption to being something like an ad hoc stream cipher.

What do you think a stream cipher is? CTR-mode stream ciphers are just a PRF stream (which a CSPRNG provides) XOR'd with your data, and maybe concatenated with a MAC.

If your PRNG generates the same output twice, your OTP is hosed. Your CTR-mode is also hosed. So, a CSPRNG must not produce the same output twice.
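Both constructions are the same XOR at heart; a minimal sketch, including why reusing the pad or keystream is fatal in either case:

```python
import os

def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))

msg = b"attack at dawn"
pad = os.urandom(len(msg))          # OTP: the pad; CTR mode: PRF keystream

ct = xor(msg, pad)                  # encrypt
assert xor(ct, pad) == msg          # decrypt is the same operation

# Reuse the pad/keystream and the secret cancels out, handing the
# attacker the XOR of the two plaintexts:
ct2 = xor(b"attack at dusk", pad)
assert xor(ct, ct2) == xor(msg, b"attack at dusk")
```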

Also, what Thomas said. OTP is not a thing.


> CTR-mode stream ciphers are just a PRF stream

Not exactly a pseudorandom function, but a pseudorandom permutation. This matters as soon as you start approaching the birthday bound:

https://crypto.stackexchange.com/questions/59738/what-are-th...


> the bits in the output would be correlated, if in a complex, cryptographically obscure way.

Yes – but by definition, no feasible computation would be able to detect the correlation; and if nobody can detect that it's there, it does not matter.


Secrecy is temporary; you only need an encryption algorithm to protect your data until its expiration date. For AES-128, the key can be recovered with a computational complexity of 2^126.1 using the biclique attack. If you can do such a computation fast enough, you will be able to read much secret data.


Can anyone recommend between librandombytes and libsodium's randombytes?

https://github.com/jedisct1/libsodium/tree/master/src/libsod...


If you're using libsodium, use libsodium's randombytes.


When are we going to get a distributed peer to peer randomness service?

I could roll a die in return for $random crypto currency.

Obviously the amount could vary depending on the amount of randomness. So me thinking of a random number would get less than a die roll which would get less than this comment.


If your values don't need to be secret, you can use the latest bitcoin header.[1]

Similar to centralized random beacons like the NIST[2] and Chile beacons.

1: https://eprint.iacr.org/2015/1015.pdf

2: https://csrc.nist.gov/projects/interoperable-randomness-beac...


Why would one need randomness that isn't secret?


Verifiable lotteries and time stamping are the use cases I am aware of.


> When are we going to get a distributed peer to peer randomness service.

(assuming you mean computers returning random numbers, and your people with dice is an analogy)

The problem with that is how do you trust the source if you need “cryptographically secret” random numbers. One of the sources could poison the well with bad entropy and increase by a small but significant amount the chance of guessing your keys.

OK, so you could pull the data from many sources, but that would add latency, so it's not an option where performance matters; and even then, if someone gains control of a significant portion of the distributed system (just by standing up lots of hosts) the issue persists.

You could also do a bunch of statistical tests, but again that is work that will harm performance, and if you are going to that sort of effort anyway you could set up your own random sources (a couple of active Linux boxes, running haveged on older Linux kernels; on newer kernels that isn't needed (https://github.com/jirka-h/haveged/issues/57)) and use those tests to make sure those sources are statistically safe.

So such a service isn't really needed, and where it might be isn't likely to be trusted, so it could exist but as a play-thing not a serious service.

On my little home server, not a particularly up to date CPU etc., running 5.10, I can pull >3Gbit/sec from /dev/random. Heck, the Pi400 that is currently my router can hand out ~230Mbit/sec of entropy.


>The problem with that is how do you trust the source if you need “cryptographically secret” random numbers.

Well in the modern age this is solved by upvotes and reputation score.

>it could exist but as a play-thing not a serious service.

I wasn't being particularly serious. Well, I'm seriously surprised someone hasn't tried it, but I'm not investing my own hard money in the idea, so...


Cloudflare has exactly that: https://blog.cloudflare.com/randomness-101-lavarand-in-produ...

> LavaRand is a system that uses lava lamps as a secondary source of randomness for our production servers. A wall of lava lamps in the lobby of our San Francisco office provides an unpredictable input to a camera aimed at the wall. A video feed from the camera is fed into a CSPRNG, and that CSPRNG provides a stream of random values that can be used as an extra source of randomness by our production servers.


And the League of Entropy, which includes Cloudflare, for data that needs to be random but doesn't need to be secret: https://blog.cloudflare.com/league-of-entropy/


Both seem like a wank project made by bored engineers tbh...


It's a thing. As with all blockchain things, it's not an efficient thing, but it exists: https://ethereum.stackexchange.com/questions/191/how-can-i-s... .

The TL;DR is that:

- Everyone playing comes up with a random number X, which is kept local for now

- X is hashed, such that HASH(X) = Y, where Y is then sent to a server where everyone can view it (aka some blockchain smart contract). Some other stuff around preventing impersonators happens as well, but the hash being sent is the important bit. This is your commitment to the game.

- When everyone has submitted their Y values, they all send in their original X values as well

- Every X value is verified such that HASH(X) = Y.

- All the X values are combined, and used as a seed to an RNG (i.e. something like concatenation, run a KDF over the concatenation, use that output as a seed to an RNG)

- Now everyone knows who won the virtual coin flips, without a central decision point.
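The steps above can be sketched as follows (the player names and the combine step are illustrative, not the linked contract's exact scheme):

```python
import hashlib
import os

# Commit phase: each player picks X locally and publishes only HASH(X).
secrets_x = {name: os.urandom(32) for name in ("alice", "bob", "carol")}
commitments = {n: hashlib.sha256(x).hexdigest() for n, x in secrets_x.items()}

# Reveal phase: everyone publishes X; all check it matches the commitment.
for name, x in secrets_x.items():
    assert hashlib.sha256(x).hexdigest() == commitments[name]

# Combine: hash the concatenated reveals into a shared seed. Once the
# commitments are fixed, no single player can steer the result.
seed = hashlib.sha256(b"".join(secrets_x[n] for n in sorted(secrets_x))).digest()
coin = seed[0] & 1  # everyone derives the same coin flip
```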


Yes but how are they getting that random number X?

Is it from some rng generator or is it from nuclear decay of a Cesium atom?

This is what my product provides. Truly random (TM) numbers!



I do find it interesting that the quality of a random number is completely divorced from the actual number itself.

If I gave the number 1 that could be random or not. The fact that it's a 1 is unimportant. Only how it was derived.

I suppose it's like free range eggs in that respect. But free range eggs and mathematics don't seem to be natural bedfellows.


that sounds like a great idea! I'll return 3 when peers ask me for a random number, and then I'll start seeing if that seed shows up in any prominent RSA private keys


As long as the 3 was randomly derived that's fine.


This is a randomness distillation function that gets entropy from a system source like linux getrandom() or the OpenSSL RNG. It's nice but it is purely computational. It doesn't harvest entropy on its own, if that is what you were hoping.


> It doesn't harvest entropy on its own, if that is what you were hoping.

I'm actually hoping that userspace applications stop attempting any such thing if it can be at all avoided (and on most systems, it can).


Why would it need to "distill" anything on modern Linux/BSD?


I think the idea is to be fast by avoiding system calls and using fast primitives underneath, and to irreversibly garble the internal state quickly. I don't see very obvious applications for this, though. If you think your computer is leaking internal state, you have bigger problems. Otherwise, use getrandom() to seed your favorite stream cipher.

I do want to look at this more closely, because if DJB thinks it is worthwhile, there is likely to be something to it. But it doesn't jump out at me after the quick glance that I took.


Well, the lib is from 2008, when the randomness story looked worse. Might have been worthwhile back then.

Also

> Another virtue of having a randombytes() abstraction layer is that test frameworks can substitute a deterministic seeded randombytes() providing known pseudorandom bytes for reproducible tests. Of course, the randombytes() provided by these test frameworks must be kept separate from the fresh randombytes() used for deployment.


arc4random() has existed to produce cryptographically random values for well over a decade (and despite the name, on Mac at least it is not RC4-based, and I assume Linux is the same). Additionally, Windows and Darwin/XNU have APIs to get large arrays of random values, and I again will just assume Linux does too. This library should not be doing anything other than wrapping the specific API provided by the host platform.


Small program to generate random bytes to stdout, using original 2008 randombytes() function that presumably inspired librandombytes.

Usage:

    a.out number_of_bytes
For example,

    a.out 128 > data
    od -tx1 -An < data
Taken from public domain source code by djb and jmojzis. Tested on Void Linux with musl.

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <errno.h>
    #include <sys/stat.h>

    int strtonum(long long *,const char *);
    int strtonum(long long *r,const char *buf){
        char *bufpos=(char *)buf;
        int flagsign=0;
        long long i;
        unsigned long long c,ret=0;
        if (!buf) goto failed;
        switch(buf[0]){
            case 0:   goto failed;break;
            case '+': ++bufpos;break;
            case '-': flagsign=1;++bufpos;break;
            default:  break;
        }
        for(i=0;bufpos[i];++i){
            c=bufpos[i]-'0';
            if(c>9)break;
            c+=10*(ret);
            if(ret>c)goto failed;
            ret=c;
        }
        if(i==0)goto failed;
        if(flagsign){*r=-ret;if(*r>0)goto failed;}
        else{*r=ret;if(*r<0)goto failed;}
        return 1;
    failed:
        *r=0;
        errno=EINVAL;
        return 0;
    }

    int writeall(int,const void *,long long);
    int writeall(int fd,const void *xv,long long xlen)
    {
        const unsigned char *x=xv;
        long long w;
        while(xlen>0){
            w=xlen;
            if(w>1048576)w=1048576;
            w=write(fd,x,w);
            if(w<0){
                if(errno==EINTR||errno==EAGAIN||errno==EWOULDBLOCK){
                    struct pollfd p;p.fd=fd;p.events=POLLOUT|POLLERR;
                    poll(&p,1,-1);continue;
                }
                return -1;
            }
            x += w;
            xlen -= w;
        }
        return 0;
    }

    void randombytes(unsigned char *,unsigned long long);
    /* it's really stupid that there isn't a syscall for this */
    static int fd = -1;
    void randombytes(unsigned char *x,unsigned long long xlen)
    {
        int i;
        if(fd==-1){
            for(;;){
                fd=open("/dev/urandom",O_RDONLY);
                if(fd!=-1)break;
                sleep(1);
            }
        }
        while(xlen>0){
            if(xlen<1048576)i=xlen;else i=1048576;
            i=read(fd,x,i);
            if(i<1){sleep(1);continue;}
            x+=i;xlen-=i;
        }
    }

    int fsyncfd(int);
    int fsyncfd(int fd){
        struct stat st;
        if(fstat(fd,&st)==0&&S_ISREG(st.st_mode)){
            if(fsync(fd)==-1)return -1;}
        return 0;
    }

    void byte_zero(void *,long long);
    void byte_zero(void *yv,long long ylen){
        long long i;char *y=yv;
        for(i=0;i<ylen;++i)y[i]=0;
    }

    static unsigned char buf[4096];
    int main(int argc,char **argv){
        long long i,l;
        if(!strtonum(&l,argv[1])||l<0)exit(0);
        byte_zero(buf,sizeof buf);
        while(l>0){
            i=l;
            if(i>sizeof buf)i=sizeof buf;
            randombytes(buf,i);
            if(writeall(1,buf,i)==-1)exit(0);
            l-=i;
        }
        if(fsyncfd(1)==-1)exit(0);
        exit(0);
    }


getrandombytes() in librandombytes uses getrandom() or getentropy() or /dev/urandom (in decreasing order of priority), while getrandombytes() above uses /dev/urandom exclusively.


librandombytes is still exclusively for Linux, according to the documentation included, unlike the original from 2008.


"getrandombytes()"? Where do you see that above? Maybe you mean randombytes().


sorry, yes i meant randombytes()


Is there an actual license file somewhere? Not only is the title of the page not necessarily authoritative enough, but public-domain dedication is not a thing in many countries, which is why CC-0 exists.


> but public-domain dedication is not a thing in many countries, which is why CC-0 exists.

Has there ever been a real instance of this distinction actually mattering? Has a German software company ever gotten into real trouble because they used American public domain code without a locally valid license?

It seems like an academic objection for lawyers to wring their hands about. Risk-averse organizations with investors will demand licenses for software like SQLite because a few thousand dollars to eliminate a minuscule remote risk is basically nothing to a software business. But does your average German FOSS hacker bother to buy a license to SQLite? Would that really be a rational use of their own money? I doubt it.


SQLite3 is in every phone and every laptop and... Do Apple, Google, etc. have to do something special in order to use SQLite3 in Germany? Have there been any court cases about this? Have there been any fines issued or paid over this? IMO the whole Germany-doesn't-have-public-domain thing is just FUD.


SQLite offers to sell licenses to organizations that worry about it, Apple and Google have probably bought such licenses to cover their asses in Germany just in case.


Yeah, but does anyone really bother? Is there any German case law about this?


Germany uses civil law (Roman system), not common law.


The courts might not make common law, but there could still be no court cases at all about public domain software, which would be pretty telling that those just don't come up even though there's an abundance of public domain source code.


That is true; still, it has a comprehensive copyright page which includes this rather explicit permission:

> Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original SQLite code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.

https://www.sqlite.org/copyright.html

This is quite different from librandombytes...


On some level, no open-source license matters, you are not going to get in trouble for stealing some rando's GitHub repository. In fact they would never find out.

However if this aspect is important enough to you that you put it in the very title of your site, you should probably do it in a way that actually works for people.


> However if this aspect is important enough to you that you put it in the very title of your site, you should probably do it in a way that actually works for people.

I assert that public domain does work for people, even Germans in practice. It doesn't work for risk averse corporations.


The point of a license is entirely to mitigate risk. I trust open-source developers to not go after me and my meager projects, but still I appreciate when they take the 2min needed to slap a legal-like document on their library.

When you refuse to do that, and decide to spend way more than 2min explaining your belief that this might not be required (though you are not a lawyer, have no court decisions to back it up, and have otherwise done a limited review of a few countries), you are making the conscious decision to go out of your way to increase the risk on me. I don't appreciate that, but does that really make me "risk-averse"?


American FOSS developers who put their code in the public domain are taking those 2 minutes to slap a legal-like document on their code. A short document telling other programmers that the code is public domain clearly communicates the intent and wishes of the author to other developers.

They're giving something to the world for free, with no strings attached, clearly communicated. But despite that, some people will complain because it wasn't done in precisely the correct way to keep corporate lawyers in a notoriously legalistic and pedantic foreign country happy.


Any legal-like document that mitigates risk for one party does so by restricting another party.

So, it's natural that people will choose not to do more than they see as necessary to deal with speculative risks raised by third parties who are often either not attorneys, or attorneys for people whose interests are not aligned with those whose action is sought, based on some foreign legal system with which the actor is unfamiliar.

If you don't like what you are being offered for free, you are, of course, at liberty to move along.


> you are, of course, at liberty to move along

I am also at liberty to offer a comment on it on this here comment section... If you feel like the point of this site is to offer silent upvotes, you are at liberty to do that.


DJB has also written about the public domain: http://cr.yp.to/publicdomain.html


Usually copyright is the author's right and only the author can enforce that right, so it would take the public domain author to sue users, which I suppose rarely happens.


Not sure. But I think SQLite won't look at contributions from non-public domain countries. I mean, they aren't really open to contributions anyhow, but being public domain was mentioned to me by Richard as a mistake they made that they've had to deal with.


> being public domain was mentioned to me by Richard as a mistake they made that they've had to deal with

This doesn't sound right. Did he explain to you why he thinks he's stuck with it? He has the legal right to release SQLite under some other license and does so when companies pay him for it.

Generally speaking, nothing about putting code in the public domain precludes collaborating with other developers. SQLite's caution against accepting contributions (even when the contributor is another American willing to sign over their contribution to the public domain) probably has more to do with Oracle being notoriously litigious and nasty. Not accepting contributions reduces the risk of one day being sued by Oracle, since it reduces the risk that Oracle IP might accidentally show up in SQLite. This would be a concern regardless of what sort of license SQLite used.

If you're not worried about that sort of thing, there is nothing which prevents an American FOSS developer from accepting public domain contributions from other developers.


Going from memory, he was warned by legal or something, that someone from a place where public domain didn't exist as a concept couldn't give them code and say it was public domain.

He's on here occasionally, so maybe he'll jump in but I think the issue is the contributors might not be able to put things into the public domain.


https://cr.yp.to/publicdomain.html

I'm not taking a stance, I'm just the messenger.


Interesting. It is on purpose then, for better or worse.

Maybe he's right, maybe he's wrong. What he's not is a judge or lawyer, so I'll keep with the status quo of licensing.


What he isn't is a German. He's an American and the public domain is healthy and well established here.


djb is also a German citizen https://cr.yp.to/cv/cv-20080915.pdf (Citizenships: USA; Germany; Native language: English. Language courses: French (advanced), German, Danish.)


This is a whole can of worms with Bernstein. But the library is pretty trivial, so if this really worries you, just use `getrandom`.

The actual source files are all labeled "public domain". That's all you're going to get from him.


> public-domain dedication is not a thing in many countries

See the author's other page with thoughts on this subject: http://cr.yp.to/publicdomain.html


Wow, is this really going to be the top comment?

This library has been around since 2008 and chances are you are using software that includes it right now; e.g., your web browser may have used djb-derived cryptography when connecting to HN.

If I wrote software using djb code and made money from it, and djb asked me for a percentage, then I would happily pay. The chances of that happening are probably nil, particularly the first part. Alternatively, if I wrote software using djb code and he asked me to refrain from doing something, then I would probably comply. He has a strong reputation for being right. Again, the chances of being asked for anything seem to be nil anyway.

Assuming devurandom.c, i.e., randombytes(), is used in djb-authored cryptography (which itself, or close derivatives, is now used widely), how many of the folks working on the projects listed at the web pages below are using public domain djb code without a license? (I don't know the answer; I'm asking a question.) One might guess by looking at these lists that the "status quo" is to accept djb's public domain designation.

https://ianix.com/pub/curve25519-deployment.html

https://ianix.com/pub/ed25519-deployment.html

https://ianix.com/pub/salsa20-deployment.html

https://ianix.com/pub/chacha-deployment.html

Take OpenSSH for example.

https://raw.githubusercontent.com/openssh/openssh-portable/m...

OpenSSH uses djb cryptography for generating ed25519 keys, among other things. That uses randombytes() as in crypto_api.h above.

djb's randombytes() is simply a wrapper around /dev/urandom; OpenSSH's randombytes() in ed25519.c is a wrapper around arc4random_buf(). arc4random_buf() is a wrapper around arc4random(). arc4random() uses djb cryptography placed in the public domain.

https://raw.githubusercontent.com/openssh/openssh-portable/m...

No idea what software the parent writes but as an end user I would prefer to avoid it since apparently they have to avoid widely used, publicly vetted code because it's in the public domain.


If you don't care whether it's public domain, that's fine. That seems completely orthogonal to the discussion of whether that works, though.



