Three mistakes from Dart/Flutter's weak PRNG (zellic.io)
96 points by gnabgib 9 days ago | 84 comments





Why did Proton and SelfPrivacy use Random instead of Random.secure?

DTD runs on localhost or over SSH because of the unencrypted websocket.

Browsers block insecure WS over HTTPS. The generated key is for tagging multiple instances per IDE, not for security.

Random is instant and is used for runtime collision prevention and performance (especially UI), not uniqueness; a PRNG is not truly random anyway.

Random.secure has a large overhead from accessing OS entropy and isn't always supported. [1]

Go's 'math/rand', Python's 'Random', and C#'s 'Random', to name a few, are also not unique or safe for crypto.

Their secure equivalents: Go 'crypto/rand', Python 'SystemRandom', C# 'RNGCryptoServiceProvider'.

This isn't a Dart or Flutter issue [0]:

"A generator of random bool, int, or double values.

The default implementation supplies a stream of pseudo-random bits that are not suitable for cryptographic purposes.

Use the Random.secure constructor for cryptographic purposes."

[0] https://api.dart.dev/dart-math/Random-class.html [1] https://api.dart.dev/dart-math/Random/Random.secure.html
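
To make the distinction concrete, here's a minimal Dart sketch using the two constructors from the linked docs (the seed and bounds are arbitrary, and the behavior notes come from those docs):

    import 'dart:math';

    void main() {
      // Fast, seedable PRNG -- fine for UI jitter, shuffling, test data.
      final fast = Random(42);          // deterministic when given a seed
      print(fast.nextInt(100));         // reproducible across runs

      // OS-entropy-backed generator -- for keys, tokens, UUIDs.
      // Per the docs, throws UnsupportedError if no entropy source exists.
      final secure = Random.secure();
      print(secure.nextInt(100));
    }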


When I wrote a Random class for use at Amazon, I made SecureRandom (which couldn't be seeded, and was a multi-source DRBG) the default, and you had to intentionally choose InsecureRandom. This is how it has to be, and in this case there should only be Random, and Random.insecure IMO.

Yeah, it's a weird critique.

The author acknowledges early on that `Random.secure` is there to provide randomness where it needs to be secure, but then spends the entire article fretting over how the other source of randomness isn't. But it's not supposed to be!

Applications have plenty of reason for cheap, stable, or otherwise controlled forms of randomness and shouldn't have to pay the penalty for -- or face the reproducibility limitations of -- secure randomness when that's not what they need. It's both useful and traditional for standard libraries to either include both forms, or to leave the secure form for other libraries to contribute.

If an application uses the wrong supply of randomness for their needs, that's an application error. And if developers writing security-sensitive code don't know to anticipate this distinction and avoid the error, then that sounds like there was a project management error in assigning them such a sensitive task.


Well the article still has a point in that the insecure PRNG is neutered for seemingly no reason, contrary to developer expectations.

But I think we should, as language developers and users, be well beyond the point of pushing out and accepting insecure defaults with a little documentation disclaimer - especially if even the Dart developers themselves can't catch misuse of their own insecure APIs in widespread tools.

At the end of the day, we don't live in a computing world where true randomness is that resource-intensive anymore, and user-facing applications should basically always be relying on it. Non-randomness should really be the opt-in, when developers can actually justify that the performance penalty is a problem. Otherwise, I don't need library authors prematurely optimising my programs at the cost of security.

(There is a similar related discussion to have about the pervasiveness of floating points in programming languages when most applications would be better served by a more expensive, but more precise, numerical representation - since they rarely do that much calculation.)


It’s not random, that’s the point. If you neuter it so much that it’s only useful for a specific use case then make sure it’s named something relevant and specific related to that use case!

I'd say it's because Proton does not employ actual cryptographers or security engineers. They do seem to have a lot of marketing folk, interns, and particle physicists, allegedly.

In .NET it's just 'RandomNumberGenerator' from System.Security.Cryptography. 'RNGCryptoServiceProvider' is obsolete.

This is weird, since it reads as a critique of the insecure random number generator in a language that has a secure random number generator. What does it mean for the insecure random number generator to be insecure? That's the point: it's the one you use when security doesn't matter, because you need speed or determinism.

With any question like this, the path to an answer follows four intermediate questions:

1. Was there a security problem that resulted from the use of this insecure generator?

2. Were the developers who encountered this problem completely unreasonable and in the wrong, or were they simply non-(security)-experts who developed software that happened to have a security vulnerability?

3. If the latter, did the language/toolkit strongly encourage developers to use an insecure generator -- e.g., by making an insecure generator the default or "simpler" option (such as Random() instead of Random.secure())?

4. Is there some broad and general understanding that this toolkit is only intended for applications, such as monte-carlo testing or graphics rendering, where performance at the cost of security will always be the correct solution?

Given that the authors are able to offer several examples of real security vulnerabilities caused by developers using an insecure PRG, I'm going to read the answers as (1) Yes, (2) not unreasonable, (3) Yes, (4) I don't think so.

The decision to make insecure RNGs the default creates security vulnerabilities. This is demonstrable and has broad and unpredictable effects, given that developers are not security experts. The decision to make secure RNGs the default (easier path) has exactly one implication, which is that a small number of specialized applications will have slightly worse performance, performance that can easily be improved by making a conscious decision to remove the secure RNG.


I think most very experienced engineers would strongly disagree with you on (2) and would laugh at the usage characterization you make in (4).

On #2: The difference between a PRNG and a cryptographically secure random number generator is foundational knowledge for software engineers, and there's (finally) a strong trade consensus that writing security-sensitive code has a higher bar of preparation and shouldn't be done naively. It's entirely unreasonable that someone writing a `uuid` implementation that's meant to be a cryptographically secure library for use by application developers wouldn't anticipate that their standard library would offer both and would prioritize the PRNG as its basic and most accessible generator.

On #4: It's very rare that an application developer needs direct access to a cryptographically secure random number generator, especially because of that trade consensus that security-sensitive applications deserve expert care. In contrast, application developers are building test data pools, shuffling arrays, adding noise to signals, adding surprise to gameplay, writing and running reproducible tests, etc. every day, and naive use of a cryptographically secure random number generator in those contexts is easily paralyzing and catastrophic. The overhead can be orders of magnitude (which matters quite a lot in typical highly repetitive/looped usage), and in some implementations it can introduce non-deterministic and potentially indefinite blocks on execution. And because they specifically can't be made deterministic by design, they significantly frustrate debugging and testing efforts that benefit from reproducibility. They're an essential tool in modern standard libraries for the unusual cases where they're needed, but they are not suitable for naive everyday use.


I think the following statements are entirely compatible with one another:

* Most software developers (let's say, 95-99%+) are smart enough to avoid using insecure PRGs in their code.

* The remaining 1-5% have contributed to a litany of avoidable security disasters, nonetheless.

* The impact of these disasters has been vastly outsized when you compare it to the number of bad developers.

* If every developer understood that they needed security expertise (and indeed, that they were even writing security-critical code), the world would be a very different place than it is.

As Thomas mentioned below, I am something of a collector and obsessive when it comes to incidents of bad random number generation. What's fascinating to me about this Dart/Flutter story is not the result, it's that I haven't seen one of these stories in a long time!

By this I mean: this type of incident used to pop up on HN every month or so, and yet in the past few years they've become incredibly rare. And do you know why that is? It is not because developers got better. It's almost entirely due to the fact that development frameworks (particularly JS in browsers) have made a multi-year and systematic effort to reduce the availability of insecure default PRGs to bad developers. The result is that an entire bug class has gone from a common, monthly occurrence, to a relative rarity.

The idea that we should use insecure default RNGs because they're good for reproducible testing is a new one on me. If you need reproducible random numbers for your application, but you are asking to use an interface that does not explicitly state that it generates non-random, reproducible numbers, then you are doing both software development and testing wrong, IMHO.
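
(For concreteness, "explicitly reproducible" in Dart can be as simple as a named, seeded generator -- a sketch, with the function name, seed, and bounds all arbitrary:)

    import 'dart:math';

    // Reproducible test data from an explicitly seeded PRNG.
    // The fixed seed makes the intent (non-random, repeatable) visible
    // at the call site instead of hiding it behind a default.
    List<int> sampleInputs({int seed = 1234, int count = 10}) {
      final rng = Random(seed);
      return List.generate(count, (_) => rng.nextInt(1 << 16));
    }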


> this type of incident used to pop up on HN every month or so, and yet in the past few years they've become incredibly rare. And do you know why that is? It is not because developers got better. It's almost entirely due to the fact that development frameworks (particularly JS in browsers) have made a multi-year and systematic effort to reduce the availability of insecure default PRGs to bad developers. The result is that an entire bug class has gone from a common, monthly occurrence, to a relative rarity.

That's a fair take and worth considering.

I suspect this whole dilemma speaks to a widening divide between those who see security as the top and inviolate priority in modern software and those who still see software as something with much more varied and heterogeneous concerns. And that gap gets most contentious when it comes to things like what should be considered the default, what capabilities should be available at all, how convenient those features should be, what emphasis should be placed in education/training/mentorship, etc.

Undoubtedly, (often sloppy) network-delivered and network-attached software has come to dominate the industry, and with it the consequences of security-practice lapses have become more severe than they once were. But at the same time, we can watch as things like correctness, stability, resource efficiency, performance efficiency, maintainability/repairability, etc. tumble away and turn much of the same software into byzantine garbage that barely runs on X-core YYY-GHz hardware with ZZ gigabytes of RAM and faults half the time when it does.

So I think there's a real tension, but I get where you're coming from. It may in fact be true that making a seeded PRNG less easily available would be a wise favor to the security-first-and-always people. :D


Just to beat the horse to death, I want to be clear that what I'm asking for isn't much:

1. The default random() call/library should always produce exactly what it says -- real, secure, unpredictable (pseudo)random numbers.

2. For statistical and non-security applications it's perfectly fine to have a generator of the form random.insecureAndFast(). Or call it whatever you want, the important thing is that the developer make a tiny amount of conscious effort in the process of using it.

3. If you require a reproducible (and insecure) generator, just use approach (2) above and add a seeding function. (Honestly I don't like anything that uses global state, but I don't care as long as it's labeled insecure.)

4. If you require reproducible and secure random numbers, then you have a slightly challenging problem on your hands that is way beyond the scope of this discussion.

For people who don't require secure randomness, the total "cost" of my proposal is something like an extra dozen characters added to your code in a few places -- and if that's too much work for you, then you probably aren't writing reproducible tests or spending a lot of time optimizing for speed. On the flipside, you might save someone a few million dollars when their crappy Bitcoin library uses a dependency that relies on random().
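
Sketched as a hypothetical Dart-ish API (the names, including insecureAndFast, come from the list above; none of this exists in dart:math, which only provides Random and Random.secure):

    import 'dart:math';

    /// Hypothetical wrapper: secure by default, insecurity is opt-in.
    class Rand {
      /// Default path: backed by OS entropy (here just dart:math's CSPRNG).
      static Random secureByDefault() => Random.secure();

      /// Statistical / non-security use. The name carries the warning.
      static Random insecureAndFast({int? seed}) => Random(seed);
    }

    void main() {
      final token = Rand.secureByDefault().nextInt(1 << 31);    // keys, tokens
      final noise = Rand.insecureAndFast(seed: 7).nextDouble(); // simulations
      print('$token $noise');
    }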


I think this is a reasonable solution. I also hope we can standardize a random function that takes an integer maximum for the range.

The person you're responding to is pretty familiar with the space you're commenting on, for whatever it's worth to you to know.

Hah! Thanks for bringing that to my attention. :)

I missed it too (sorry, Matt) --- I wrote my comment in bed half awake (I have a problem) and without glasses (I'm old) I can't see for shit.

I didn't even realize I was replying to you, I feel like we've had this argument 100 times now.

That may be, but having the default generator be non-cryptographic has been the conventional design choice in languages since the dawn of time. I would get the willies seeing security-sensitive code using "Random()" even if you told me in your language "Random()" was a CSPRNG.

I don’t think you should have to hold the entire canon of programming history in your head to avoid these types of footguns. Conventional design choices of the past that lead to bugs and security vulnerabilities should be replaced.

Wait. Developers consciously chose to use this random because they believed it had a 64 bit seed? That's still wrong. It's irrelevant that it was truncated to only 32 bits.

Seems like a lot of grief could be avoided if the default was the secure rng... If the insecure one is faster or something, call it random.weak() or something.

It's possible that the secure rng incorporates observed entropy; there might not be enough entropy available when you need it, if secure is made the default and therefore used unnecessarily.

No, in this setting, this is not a thing: the secure generator, if it isn't literally just a wrapper around the OS secure RNG (which it ideally is) is seeded from that RNG. Userland code doesn't observe entropy at all; you are in a state of grave sin if your userland program is trying to do that.

No, they don't mean that the application would need to be observing entropy on its own.

They're acknowledging that cryptographically useful entropy is a limited resource for the system as a whole, and naive exhaustion can cause starvation, unbounded blocking, or degradation, depending on the implementation.

Having a casual array shuffle seize up because there's not enough secure entropy available might just be an absurd nuisance in many cases. But having a session key or UUID generator seize up because somebody burnt all the entropy on casual array shuffles can be a real problem for users. (And much worse if the implementation silently degrades in quality instead of blocking or failing, as some have done.)


"Useful entropy as a limited resource" is just an incorrect meme that has somehow brainwormed a significant fraction of engineers. Once that secure RNG is seeded, you're golden. Randomness is not depleted.

Every mainstream OS is trivially capable of seeding the system secure RNG by the time userspace applications need to consume it. There is no availability / depletion problem for userspace.


That's absolutely not true, and some of us have directly experienced what happens when it runs out. There are implementations that will keep producing output without an ongoing and sufficient supply of fresh entropy, but there are also implementations that do not. Generally, those that do keep producing output in those cases are formally weaker.

When the supply is treated conscientiously, depletion broadly doesn't matter outside of weird race conditions during system startup. But if everybody naively consumes cryptographically secure randomness for every casual occasion, it becomes a much bigger problem.


No, it's definitely true. A conventional OS CSPRNG can be effectively modeled as a stream cipher, where the key is a hash of observed entropy. As you encrypt more bytes with a stream cipher, you don't "deplete key". Real CSPRNGs are more complex than this, but that complexity is there to improve things from the baseline of "hash some interrupts and key ChaCha20 with it"; the argument gets weaker on real CSPRNGs, not stronger.

The parent comment is correct: entropy depletion is a weird meme. Unfortunately, Linux encoded that meme into its legacy KRNG ("/dev/random"), so lots of people believe it to be real. It is not, and modern userland programs don't use "/dev/random" anyways.
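
(To make that model concrete, a toy sketch -- SHA-256 from package:crypto standing in for the real keyed primitive; this illustrates the shape of the construction, it is not a real CSPRNG:)

    import 'package:crypto/crypto.dart' show sha256;

    class ToyCsprng {
      final List<int> _key; // hash of observed entropy, fixed at seeding time
      int _counter = 0;

      ToyCsprng(this._key);

      /// Each output block is PRF(key, counter). Producing more output only
      /// increments the counter; nothing about the key gets "depleted".
      List<int> nextBlock() {
        final ctr = List<int>.generate(8, (i) => (_counter >> (8 * i)) & 0xff);
        _counter++;
        return sha256.convert([..._key, ...ctr]).bytes;
      }
    }

The key is written once at seeding; every later call only bumps the counter, which is why "using up randomness" isn't a thing in this framing.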


> Unfortunately, Linux encoded that meme into its legacy KRNG ("/dev/random"), so lots of people believe it to be real. It is not, and modern userland programs don't use "/dev/random" anyways.

It's not the only such system, nor has that "legacy KRNG" been eradicated from real-world use.

More to the point, I can't understand how someone with as much seasoning and experience as you seemingly have can make generalizing claims like "modern userland programs don't X".

Application software and the platforms it runs on are insanely more diverse than that, especially now that we live in a world where people just paste in verbatim docker stacks that they read about in some blogspam tutorial written 12 years ago, and countless others are building on frameworks and dependencies that are themselves broken in ways they oughtn't be (like Flutter, in the article).

While there exist contexts where cryptographically secure randomness may be treated as inexhaustible, we're decades away from being able to take that for granted across "modern userland programs", and it's much safer to have people anticipate the dangers of exhaustion than to have them presume it's universally inexhaustible.


"KRNG" => kernel random number generator.

"LRNG" => (idiosyncratic) shorthand for "Linux's KRNG".

The LRNG never had the problem you're alluding to. It's not as if the literature eventually found "inexhaustible" designs. They were never exhaustible. There is no such danger. You can get to this axiomatically by looking at how a CSPRNG works (again: it's effectively a stream cipher construction), or empirically by reading about it. Either way: no, you're wrong here.


As a project rescue specialist who's had to diagnose and treat exactly this issue more than once, I feel like I'm being gaslit, although I know that's not your intent.

As recently as Node 16 or 18 or so, which is still in real-world use despite its age, running on the real-world Docker images that these projects use, it happens. And when it does, you can rightly guess that all the client's JavaScript developers were utterly perplexed, specifically because they'd never heard of such a concern. But you point to the stack trace for when their call to uuid never resolves, provide all the documentation from familiar bloggers to help them wrap their heads around it, and voila -- the mysterious and catastrophic issue that's been plaguing them is suddenly remedied.

As I said, I appreciate that exhaustion doesn't need to happen in anything with an appropriate design. That's genuinely great. But even if it's just a certain and inappropriate way of using the Linux RNG in a certain window of Linux versions (and I'm not strictly convinced it's even so narrow a scenario), that's actually a pretty common real thing that real developers will continue to encounter in the wild for longer than any of us might want, and not some baseless myth or meme.


You're not being gaslit by me, but rather by older Linux kernels. There's no such thing as exhaustion. But there's a legacy kernel interface that pretends there is. The implication of using it is that your program will randomly stall, because the kernel is (was) gaslighting you. But that's not a cryptographic security issue.

Honestly it's kind of refreshing to see a thread like this. This was a live issue on HN like 10 years ago, when cryptographers started yelling at Ted Ts'o about how dumb "/dev/random" was, and generalist engineers on HN argued about it. But it has since become a settled matter --- here, I mean. Among cryptography engineers: never a doubt.

There are KRNG security issues! You can come up with cases, especially with embedded software or older hypervisors, where code runs with an unseeded RNG. That's very bad. But those issues aren't relevant to ordinary userland code; if your system can't run with the assumption that "/dev/urandom" is seeded, you're fucked for other reasons, so distributions (and hypervisors) go out of their way to ensure that can't happen.

At any rate: there is no case where "exhaustion" comes into play, and there never has been, but there was an archaic kernel interface in Linux that thought there could be such a thing, so there's a whole generation of Linux developers who believe it's a big deal. It is not.


No, you are mistaken, as were the Linux engineers who embedded that assumption into /dev/random. (That has been resolved on the Linux side for some time now.) I will claim (without evidence) to be a subject matter expert on this topic.

"It was an issue in Linux for a while, but isn't in recent kernels" is not really sufficient grounds for acting like it's not an issue.

A language designer or library developer, especially, generally can't assert that their code will only run on the most robust, modern, popular systems currently available. They need to work defensively, anticipating the issues that they may plausibly encounter -- especially where those issues will not be apparent to less expert application developers.

And those less expert application developers are in an even worse position, because many barely even know what they're doing in the first place, making all kinds of foolish decisions while they paste together their docker script and relying on inexpert devops/sre people (or end users!) who make their own choices about what things are running on.

It's encouraging to learn that there are now established, widespread, inexhaustibly secure RNGs out there. But it remains irrelevant to a discussion of what developers need to keep themselves mindful of, because there remain many implementations in the wild that don't behave like that, and most developers aren't in a position to guarantee which their code will encounter.


No, there's really no space for wriggling here. You use getrandom() or, in the worst case, /dev/urandom, and this kernel nit goes away. It was never the case that the LRNG was "exhaustible"; there was only a broken interface to it.

Personally, I forgot that Linux only fixed /dev/random in 2020 in kernel 5.6. That's not that long ago in terms of enterprise / LTS kernels. I'm sure this has been a surprising pain point for end-users for a long time, and perhaps still is in some environments.

(Yes, I know, you have shared a workaround for a long time prior: https://sockpuppet.org/blog/2014/02/25/safely-generate-rando... . But that sort of presumes a clued-in user.)


Right. But the context here is whether languages can make the (very big) decision to default to a CSPRNG, in which case: you make that decision once, for all your users, and when you do, you don't use "/dev/random".

Right!

New languages should make sure the default random() is cryptographically secure, and hide PRNGs behind weakrandom() or repeatablerandom() or something. Safe and slow defaults are better than fast and unsafe.

Arguably, rolling your own crypto (in this case AES which is customizable) requires a very careful implementation, beyond RNG.

Since Dart/Flutter is multi-platform, using Random.secure for animation has its own performance issues from interfacing with the host entropy RNG.

The majority of Dart/Flutter users are creating UI apps.

A few browser security policy and OS combinations do not allow access to entropy with Flutter Web, in which case Random.secure will fail; this isn't exclusive to Dart/Flutter. [1]

NaCl [0] offloads these concerns from developers, especially indie/startup teams.

[0] https://en.wikipedia.org/wiki/NaCl_(software)

[1] https://api.dart.dev/dart-math/Random/Random.secure.html
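
For what it's worth, when that happens Random.secure surfaces the problem as an exception rather than silently degrading (per the linked docs); a minimal sketch of handling it explicitly (the wrapper name is hypothetical):

    import 'dart:math';

    Random requireSecureRandom() {
      try {
        return Random.secure();
      } on UnsupportedError {
        // Fail loudly rather than quietly substituting Random()
        // where security actually matters.
        throw StateError('No secure entropy source on this platform');
      }
    }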


Rolling your own security requires nothing more than gumption and a willingness to deploy. That doesn't mean it's good security, but it means people will do it whether or not they know all the golden rules. After all, rolling insecure security requires missing one small thing in a haystack of thousands, and it doesn't matter that you reviewed the language defaults when OS version blah blah from vendor xyz defaults to something insecure "because you should have checked the defaults". The same goes for "this library does these kinds of things so there is no value in languages having secure defaults too" type thinking; these aren't convincing arguments for what the security posture of other things should or shouldn't be.

I'm more a fan of "make the defaults as secure as you can reasonably expect to get away with at each step of the way". It'll never be as secure as everyone wants, but if you butt up against "it's as secure as people are willing to put up with by default" then things are at least at a good starting point for others to build from. The hard part is finding out what people are willing to put up with and which tradeoffs are worth it. That default random number generators "only" go at GB/s on most PCs because they produce really good random numbers is probably an easy tradeoff, though.


Rust does this.

IMO, it's not worth it: It makes working with random numbers very problematic when working on cryptographic use cases.


Could you clarify this? What problems would you run into just from having the default RNG be secure?

Performance / getting a seed

Why is the default thread_rng from the rand crate a dealbreaker for Rust? There are other RNGs to choose from in rand, like `SmallRng`, a small, fast, unspecified PRNG for when you don't know what you want even for a PRNG. If the worst-case 300 microseconds of the reseeding ChaCha12 default RNG is measurable, then it is your job to make a decision about your random number generator.

I don't think RNG seeding has anything to do with the algorithm you choose? Seeding from the OS RNG is usually what you want, even for a PRNG. If you want to use the current time, there is `seed_from_u64`.


Seeding from the clock is perfectly appropriate for games, audio/video processing, etc.

Seeding from an entropy source is critical for encryption, but that can take time depending on how it works.


Too late to edit: I meant non-cryptographic use cases

This makes much more sense

Or avoid ambiguity and remove the default random().

- randomWeak()

- randomSecure()


Or randomInsecure() and randomCryptographic() to avoid any ambiguity around the word 'weak' for non-security minded programmers[0], and to avoid using the substring 'secure' twice[1] in two very different functions.

[0] In my experience, most of them -- who reads documentation these days...? Or programmers using editors like vim (no slight on vim by the way, don't shoot me!) which (generally) won't alert the programmer to the library options, unless it's been set up with a suggestion/completion/intellisense plugin?

[1] randomSecure() vs randomInsecure(), which can look similar at 3:00am on three hours sleep. Naming the functions explicitly differently makes it patently clear which function is cryptographically safe. Also helps when the interpreter/compiler won't understand the context the function is being called in, therefore can't throw warnings or errors to alert the overtired/overworked programmer that they're using insecure random number generation for crypto.

Big brain idea: Perhaps cryptographically-weak/insecure random number generators should be type incompatible by default, requiring an explicit cast to integer/float? Probably overkill, but it definitely wards off mistakes and/or misuses by forcing the developer to explicitly cast out; otherwise the compiler simply refuses and explains why, via a fatal error at compile time or an exception thrown.

But then why not make secure random number generators type incompatible by default? Because we're trying to encourage secure-by-default programming, making it slightly harder (but not onerous) to use the insecure function than the equivalent secure one. Create a speed bump to prompt the programmer to think about their code, not a road block.
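
A rough Dart sketch of that speed bump (all names here are hypothetical, and the wrapper type stands in for a "requires an explicit cast" rule):

    import 'dart:math';

    /// Wrapper that refuses to be used as a plain int without an
    /// explicit, greppable acknowledgement from the caller.
    class InsecureInt {
      final int _value;
      InsecureInt(this._value);
      int acceptInsecure() => _value; // the deliberate "cast out"
    }

    InsecureInt randomInsecure(int max) => InsecureInt(Random().nextInt(max));

    void main() {
      final roll = randomInsecure(6);
      // print(roll + 1);                 // compile error: not an int
      print(roll.acceptInsecure() + 1);   // works, and the intent is explicit
    }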


There is nothing wrong with PRNGs; they are perfectly cryptographically secure when used correctly. In fact, AES-CTR is just XORing the plaintext with a pseudo-random stream (an AES-encrypted counter + initialiser). The problem is bad and misused PRNGs.

GP is using "PRNG" to mean "the subset of PRNGs that are not CSPRNGs."

In context I read PRNGs as being compared to CSPRNGs. CSPRNGs should be the default.

Sane defaults are always the goal. The only problem is that people can't seem to agree on what behaviour is "sane".

So don't have a default. Have securerandom() and insecurerandom() and make the programmer choose.

That has the advantage of avoiding problems in a generation's time: if most platforms moved to random() being secure, it would be excusable if young programmers started assuming that would be the case on older platforms too.


It’s already implied by the presence of secure_random and it isn’t insecure when it’s being used for the appropriate use case.

It's only implied by secure_random if you know secure_random exists. This is a very very very well known thing. The default MUST be secure.

PRNGs are insecure by design, why does this article frame the problem as the implementation being insecure? Applications shouldn’t use PRNGs if they need a true random source to begin with.

The only true random sources in the universe are quantum. But it's not practical or performant to hook your workload up to a raw quantum measurement like the spin of a fundamental particle, or radioactive decay.

Instead we take sources that are still believed to be statistically random and influenced by underlying quantum randomness -- e.g. RF interference, clock phasing, or an avalanche amplifier -- measures of entropy considered to be random to the same extent as, say, the ideal gas laws. But even those measures can be imperfect; a probe might have a bias or manufacturing error that only shows up after QC.

To cope with this, we have PRNG designs that can use mixed sources in ways that provably debias any such errors from a single source and remain secure. All modern RNGs use these kinds of PRNGs or DRBG/NRBG designs as a last stage. Far from being insecure, they are what give us more confidence in the overall system.
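
The simplest version of that mixing idea is an XOR combiner: if either source is truly uniform and independent of the other, the XOR is too. Real DRBG designs are far more elaborate; this sketch just shows the intuition (the function name is made up):

    /// Combine two independent byte streams. A bias or fault in one source
    /// cannot make the output less unpredictable than the better of the two,
    /// provided the sources are independent.
    List<int> mixSources(List<int> a, List<int> b) {
      assert(a.length == b.length);
      return List<int>.generate(a.length, (i) => a[i] ^ b[i]);
    }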


> The only true random sources in the universe are quantum. But it's not practical or performant to hook your workload up to a raw quantum measurement like the spin of a fundamental particle, or radioactive decay.

I was under the impression that's how entropy generators for servers work?


This feels like MilkSad.info and the 2020/2021 Cake Wallet flaws all over again

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3991...

https://milksad.info/posts/research-update-10/


Some missing timelines:

December 11, 2024 - Dart SDK 3.6.0 was released.

December 12, 2024 - Flutter SDK 3.27.0 was released.


For a PRNG, you should probably be generating output with something like a good hash function that passes the SMHasher3 tests (which, funnily enough, I just posted about after a long-overdue enhancement: https://news.ycombinator.com/item?id=42409577).

Alternately develop something that at least passes all the Dieharder tests.

There are many good options for a variety of requirements.

Maybe the following perspective is missing some key context here (?), but when I skimmed the code for this, I was stunned that such a low-quality algorithm was used by Zellic (some kind of high-end cyber genius cabal, it seems) and for cryptocurrency purposes. I'm rarely harsh on projects here, but bloody hell, that's utterly stupid. Especially when there are so many good options.


PRNG = pseudo-random number generator. I think it would have made sense to spell it out in the first sentence and go with the acronym from then on. Like the author does it with multiply-with-carry (MWC). Makes it more readable and consistent throughout the article.

[flagged]


>The article really assumes you know the acronym PNRG

Why would you be reading this article if you do not?

This is the type of critique given when writing for a general audience, not a scientific whitepaper.

(Fun fact: The AP style guide tells folks to write at an 8th grade level because that's the reading level of the average American adult. 8th graders do not read white papers on cryptographic design, and when they do, they look up unknown acronyms just like big boys and girls do.)


I expect readers of HN to know the acronym PRNG, and that random must be seeded all the time and must NOT be used for secure things. But we're living in the age of LLMs that create perfect code while programmers know nothing anymore.

This comment really assumes you know the acronym LLM

Are PRNG the letters on an automatic transmission stick?

Or that you have a web browser with search capabilities that you're likely to use before complaining about acronyms...

The term is table stakes for the audience this is written for.


> PNRG

You sure about that?


I’m not a Dart fan so maybe I’m biased, but wow this (from the article) is a failure on many levels: https://github.com/dart-lang/sdk/issues/56609

No, it's totally overblown. As discussed in the comments on that issue as an example, the failure was in whoever wrote Flutter's `uuid` feature.

It's simply incorrect for them to have naively used a PRNG there, regardless of whether it has a static seed or not. They should have used Random.secure all along, and anyone working in a cryptographically sensitive module like `uuid` should be familiar enough with the distinction to make the right choice.

There's nothing especially harmful about having that static seed, and there are even some diagnostic and debugging advantages offered by it. Shuffling the seed on init is a generous convenience for people who want to make sure there's more per-run variety in their randomness without setting a seed themselves, but (when used in a non-secure context) it quickly doesn't matter, since anything but toy programs tends to have enough complexity, branching, and data dependency that the sequence of invocations isn't consistent anyway.


From the code in the issue

    // TODO: Make this actually random 
    static int _initialSeed() => 0xCAFEBABEDEADBEEF; 
Oops.
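
(One way to do what that TODO asks -- a sketch only, not necessarily how the SDK actually fixed it; written as a top-level function rather than the original class member:)

    import 'dart:math';

    // Draw the seed from the OS-backed generator instead of a constant.
    // nextInt is capped at 2^32, so combine two 32-bit draws into 64 bits.
    int initialSeed() {
      final r = Random.secure();
      return (r.nextInt(1 << 32) << 32) | r.nextInt(1 << 32);
    }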

That TODO was committed on 2022-02-16 and wasn't fixed until 2024-09-2. That is... a long time for something that critical.

Oops indeed. I like using FIXME for cases like this, to make it stand out as a deficiency that should be fixed ASAP (usually before merging the change at all), compared to TODOs, which are improvements that can be implemented in a more leisurely fashion.

[flagged]


> It's clear just from using the language that it was designed as a more "serious" Javascript replacement

What is your usage of Dart?

JavaScript is a target platform for Dart. The Dart interop story is the point. It’s a cohesive glue language. First class interop with JavaScript, Java/Kotlin, C++ and Swift/ObjC. Community interop supports go and rust.

Maybe I am confused. We are comparing Dart to what again? Yes, Flutter is what Dart is known for, and it’s, imho, important for there to be a primary customer to drive evolution of the language.

I don’t do JavaScript development, but any time I bump into it, it’s a cluster with regard to tooling (maybe this has changed in the past 3 years). Dart comes OOTB with tooling that is a cohesive experience. The technology is impressive and I think it’s filling a niche.

Dart reminds me of Python 20 years ago. Python shipped with IDLE and glued platform features together (win32com, CORBA, Java, etc.).

Dart is doing similar stuff, but with first-class interop from the supplier (Google/Dart team) for the platforms (js/wasm, mobile, desktop). It is pretty damn amazing and ambitious (maybe too ambitious?). And the pub package system is nice -- Dart is statically compiled, and Google is one of the main innovators in software supply chain security; I think there is a real possibility of it being orders of magnitude more secure than Python/JavaScript. (This is not the case currently.)

I guess I see it as a “I can stay in this world and do most of what I need”. Are there sharp edges? Yes. But the tooling and the community can resolve this fairly quickly. I don’t need to jump between a bunch of languages with tooling disparities.


For clarity, I'll admit that I do not have industry experience with Dart. I have built a medium-sized mobile application with it, and have spent a pretty good deal of time digging into Flutter's architecture and reading its source code.

If you are debating the statement I made: "It's clear just from using the language that it was designed as a more 'serious' Javascript replacement", it's simply a fact that the Dart language was originally designed to be a Javascript alternative in Chrome, and the language clearly takes a lot of inspiration from Java and C++, which are widely used at Google.

> Maybe I am confused. We are comparing Dart to what again?

I'm comparing it to other garbage collected languages. I don't think Dart really even has a backend story, but in that case I would compare it to Go, any JVM language, and Javascript/Typescript. For frontend applications I'm comparing it to Javascript/Typescript.

Maybe you could tell me what about my comment is confusing, but I'll rephrase it and add some more context. From everything I've read, the Dart team at Google is small and insular. Google has a nasty reputation for cutting niche products and has been doing lots of layoffs in recent years. Even though Dart has a good UI library, there are numerous attractive alternatives that are more familiar and popular. Putting all that information together, I would not choose to use Dart for basically any new project.


> numerous attractive alternatives that are more familiar and popular

There’s React Native. Ionic is stuck in the same limbo that PWA are in and Tauri is obscure. Qt is hardly used in mobile apps and Microsoft’s successors for Xamarin deprecate themselves every few years. What else is there, NativeScript? Kivy? Delphi?


Dart is really not much different from a PWA/Ionic, though: the language requires bindings to use native APIs; its runtime is not native to any platform (whereas most platforms do have native web runtimes); and its rendering system doesn't support native widgets (in fact, it currently uses Skia, which is the same rendering library as Chrome).

And yet, it has much higher adoption. I think it might be because PWA have long been second-class citizens on mobile, even with some recent OS updates.

The title had me quietly think to myself "Dart was the mistake".

So many stacks/environments are leveling up their languages: ObjC => Swift, Java => Kotlin, Erlang => Elixir, C++ => Rust, Javascript => Typescript

Dart feels like one of the befores, not the afters.


Feels is the key to that statement. :-) It’s these “feels” that got USA to its 2024 election outcome.

Are you ok?

>November 1, 2024 — The Google Bug Hunters team decided to not reward nor announce this security fix, because it only affects developers.

Good to know Google doesn't care about my security. Long time Flutter user, but Google expresses disdain for it at every opportunity...

Might see if the C# mobile frameworks have caught up...



