
Cool idea. I would be interested in knowing the concentration you achieve and flow rates (LPM/CFH). This wouldn't work in my workshop as we don't treat our equipment nicely enough and we don't have sparkies to tend to it.


> Roberts, 38, now only gets fast food "as a rare treat".

I feel that that is how it should be? When I was young it was important enough to mention at the Monday class circle if your grandparents had taken you to visit the Golden Arches, and everyone would be very jealous.


Growing up in the 80's, I got fast food (McD's, Pizza Hut) maybe once every two or three weeks. I got to go to a "nice" restaurant perhaps twice a year.

My parents however, left us with a Tombstone pizza or Swanson's TV dinner three times per week when they went out to dinner.


> My parents however, left us with a Tombstone pizza or Swanson's TV dinner three times per week when they went out to dinner.

I would consider this negligent parenting due to the poor nutrition of those meals.

It's not time-consuming to cook some lentils or another protein, add some spice, and eat some yogurt.


I have a personal rule that if I can remember the last time I ate fast food, it's too soon to eat it again. That seems to space it out to no more than a handful of times a year, and I agree that's about as often as I'd like it to be.


Growing up poor (mom was single, working as a secretary in the 1970s trying to support two small kids), Shakey's Pizza offering free drinks on Tuesday nights (if you ordered a large pizza and brought in a coupon) was our special night out.

Fast food has never been "normalized" for me and I think that is a good thing.


For an entire family, maybe so, but let's not confuse fast food with that rare gourmet meal experience.

Its niche is good-value convenience food for those on the go, and cheap enough for kids' parties etc.

If it's now at premium prices, local options are likely to be way better. I mean, I'm not in the US, but there's absolutely no way I'm paying $12 for a McDonald's quarter-pounder-with-cheese - I remember them being famously under $1 and thus a great car snack on the way to a meeting or whatever.

And in case this is their intent, I really don't think the Chivas Regal effect can apply here - that's reserved for when the average punter can't really discern quality.


I don't disagree from a nutrition standpoint, but the very American innovation of cheap food arriving at your table fast now being seen as too expensive isn't a good sign from an economic perspective. This is like saying that if gas hit $10 it would actually be a good thing since people should drive less anyway. Like, yeah, but that's really not the key issue. Maybe the word orthogonal is what I'm looking for here?


Fun fact: That's the exact argument a lot of EU politicians made after the cutoff from Russian oil and gas caused price shocks.

Like yes, we do have to phase out fossil fuels sooner than later, but maybe that's not the core issue here?


I've often thought about why the default random implementation in many programming languages is still an LFSR, MT, or other fast RNG in the 2020s.

It seems better to err on the side of 'people don't know whether they want a PRNG or a CSPRNG' and switch the default to the latter, with an explicit choice of the former for people who know what they need :)


> It seems better to err on the side of 'people don't know whether they want a PRNG or a CSPRNG' and switch the default to the latter, with an explicit choice of the former for people who know what they need :)

That’s exactly what we did in PHP 8.2 [1] with the new object-oriented randomness API: If you as the developer don’t make an explicit choice for the random engine to use, you’ll get the CSPRNG.

Now unfortunately the hard part is convincing folks to migrate to the new API - or even from the global Mt19937 instance using mt_rand() to the CSPRNG using random_int(), which has been available since 7.0.

[1] https://www.php.net/releases/8.2/en.php#random_extension


OpenBSD had a similar problem with people calling arc4random and getting RC4 randomness, but they just changed it to use ChaCha20 anyway and backronymed it to "a replacement call for random".

https://man.openbsd.org/arc4random.3


Recently I started using a new golang library that generated random IDs for different components of a complex data structure. My own use case had tens of thousands of components, and profiling revealed that a significant chunk of the time initializing the data structure was in crypto/rand's Read(), which on my MacBook was executing system calls. Patching the library to use math/rand's Read() instead increased performance significantly.

In addition to math/rand being faster, I was worried about exhausting the system's entropy pool for no good reason: in this case, the only possible reason to have the IDs be random would be to serialize and de-serialize the data structure, then add more components later, which I had no intention of doing.

Not sure exactly how the timing of the changes mentioned in this blog compare to my experience -- possibly I was using an older version of the library, and this would make crypto/rand basically indistinguishable from math/rand, in which case, sure, why not. :-)


Do note that "exhausting the system's entropy pool" is a myth - entropy can't run out. In the bad old days, Linux kernel developers believed the myth and /dev/random would block if the kernel thought that entropy was low, but most applications (including crypto/rand) read from the non-blocking /dev/urandom instead. See https://www.2uo.de/myths-about-urandom/ for details. So don't let that stop you from using crypto/rand.Read!


Once you've somehow got 256 bits of entropy, it will be enough for the lifespan of our universe.


> this would make crypto/rand basically indistinguishable from math/rand, in which case, sure, why not. :-)

It's closer to the other way around. crypto/rand was not modified in any way; its purpose is to expose the OS's randomness source, and it does that just fine.

math/rand was modified to be harder to confuse with crypto/rand (and thus used inappropriately), as well as to provide a stronger, safer randomness source by default (the default RNG source has much larger state and should be practically impossible to predict from a sample in adversarial contexts).

> I was worried about exhausting the system's entropy pool for no good reason

No good reason indeed: there's no such thing as "exhausting the system's entropy pool"; it's a Linux myth which even the Linux kernel developers have finally abandoned.


Others have already addressed the impossibility of exhausting the system entropy pool; however, I would add that you can buffer Read() to amortize the cost of the syscall.

Also, make sure that your patch does not introduce a security vulnerability, as math/rand output is not suitable for anything security-related.
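
A minimal sketch of the buffering idea, assuming the IDs only need to be random bytes (the names here are illustrative, not from any particular library):

    package main

    import (
        "bufio"
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "io"
    )

    func main() {
        // One buffered reader over the OS CSPRNG: with a 4 KiB buffer,
        // a single syscall serves roughly 256 sixteen-byte IDs.
        buffered := bufio.NewReaderSize(rand.Reader, 4096)

        for i := 0; i < 3; i++ {
            id := make([]byte, 16)
            if _, err := io.ReadFull(buffered, id); err != nil {
                panic(err)
            }
            fmt.Println(hex.EncodeToString(id))
        }
    }

(Keep in mind that a bufio.Reader is not safe for concurrent use, so this fits single-threaded initialization like the profile described above.)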


One of the better arguments for using a CSPRNG (here, ChaCha8) is that they benchmark it within a factor of 2 of PCG. The state is still comparatively large (64 bytes vs 16), but not nearly as bad as something like mt19937 or the old Go PRNG. (If the CSPRNG was much much slower, which is generally true for CSPRNGs other than the reduced-round ChaCha variant, it becomes a less appealing default.)
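
For reference, a rough sketch of how you'd pick either source explicitly in Go's math/rand/v2 (assuming Go 1.22+; seeding ChaCha8 from crypto/rand here is just one option):

    package main

    import (
        crand "crypto/rand"
        "fmt"
        "math/rand/v2"
    )

    func main() {
        // Seed the ChaCha8 source from the OS CSPRNG.
        var seed [32]byte
        if _, err := crand.Read(seed[:]); err != nil {
            panic(err)
        }

        // ChaCha8: cryptographically strong yet fast.
        chacha := rand.New(rand.NewChaCha8(seed))

        // PCG: smaller state and a bit faster, but predictable from its
        // output, so only for reproducible simulations and the like.
        pcg := rand.New(rand.NewPCG(1, 2))

        fmt.Println(chacha.IntN(100), pcg.IntN(100))
    }

(As far as I know, the package-level math/rand/v2 functions already default to a ChaCha8-based generator, so constructing one explicitly mainly matters when you want a seeded, reproducible source or specifically PCG.)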


How did you get to 64 bytes of state? Last I looked, Go's ChaCha8 implementation had 300 bytes of state. Most of that was spent on a buffer which was necessary for optimal amortized performance.


That's correct - the state is 300 bytes (36 uint64 + 3 uint32). https://go.dev/src/internal/chacha8rand/chacha8.go


Fair enough. I was just thinking of base ChaCha state without the buffering. 300B is still significantly better than mt19937 (~2.5kB) or the old Go generator (4.9kB!).


I know posting "I agree" is not generally welcomed on here, but ChaCha8 is really underappreciated as a MCMC/general simulation random number generator. It's fast, it's pretty easy on cache, and it won't bias your results, modulo future cryptanalysis.


What are the other cases where you would need the former? I can only think of fixed seeding for things that need reproducible results (e.g. tests, verifications).

I think there's another little force that pushes people towards the PRNG even when they don't need seeding: the CSPRNG API always includes an error you need to handle, in case the syscall fails or you run out of entropy.

I'm curious how often crypto/rand's Read fails? How much randomness do I have to read to exhaust the entropy of a modern system? I've never seen it fail over billions of requests (dd does fine too). Perhaps a Must/panic style API default makes sense for most use cases?

Edit to add: I took a look at the secrets package in Python (https://docs.python.org/3/library/secrets.html) and there's not a single mention of how it can throw. Does it just not happen in practice?
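
Something like this hypothetical wrapper is what I have in mind (mustRead is not a real stdlib function, just a sketch):

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    // mustRead fills buf from the OS CSPRNG and panics on error,
    // on the assumption that the read does not fail in practice.
    func mustRead(buf []byte) {
        if _, err := rand.Read(buf); err != nil {
            panic("crypto/rand: " + err.Error())
        }
    }

    func main() {
        token := make([]byte, 32)
        mustRead(token)
        fmt.Printf("%x\n", token)
    }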


> the CSPRNG API always includes an error you need to handle, in case the syscall fails

A user-side CSPRNG — which is the point of adding a ChaCha8 PRNG to math/rand — performs no syscall outside of seeding (unless it supports reseeding).

> you run out of entropy.

Running out of entropy has never been a thing except in the fevered minds of Linux kernel developers.


> Running out of entropy has never been a thing except in the fevered minds of Linux kernel developers.

Linux used user input and network jitter to generate random numbers, not a pure pseudo-random number generator. For a perfectly deterministic pseudo-random number generator, entropy is only required for seeding, and even then you can avoid it if you have no problem with others reproducing the same chain of numbers.


Cryptographically-secure PRNGs are also deterministic, but as long as you have at least 256 bits of unpredictable seed, the output remains unpredictable to an attacker for practically forever.

Linux used/uses user input and network jitter as the seed to a deterministic CSPRNG. It continuously mixes in more unpredictable bits so that the CSPRNG can recover if somehow the kernel's memory gets exposed to an attacker, but this is not required if the kernel's memory remains secure.

To reiterate, running out of entropy is not a thing.


The difference between “I don’t have enough entropy” and “I have enough entropy to last until the heat death of the universe” is only a small factor.

An attack on the RNG state or entropy source is much more of a risk. The entropy required is not a function of how much randomness you need to generate, but of how long the system has been running.


Thankfully, Go is considering making crypto/rand infallible, because as it turns out the syscalls do not actually fail in practice (it's not possible to "run out of entropy"): https://github.com/golang/go/issues/66821


> the CSPRNG API always includes an error you need to handle (in case the syscall fails or you run out of entropy).

> Perhaps a Must/panic style API makes sense?

Yes, CSPRNG APIs should be infallible.


I like the approach of "all randomness on a system should come from a CSPRNG unless you opt out". It's the stronger of the two options, where you lose a small amount of perf for a much stronger guarantee that you won't use the wrong RNG and cause a disaster. It's a shame that this is still a sharp edge developers need to think about in pretty much all languages.


In 2024 that's the right answer. But the programming world does not move as quickly as it fancies it does, so decades ago when this implicit standard was being made it would be tougher, because the performance hit would be much more noticeable.

There's a lot of stuff like that still floating around. One of my favorite examples is that the *at family of file handling functions really ought to be the default (e.g., openat [1]), and the conventional functions really ought to be pushed back into a corner. The *at functions are more secure and dodge a lot of traps that the conventional functions will push you right into. But *at functions are slightly more complicated and not what everyone is used to, so instead they are the ones pushed into the background, even though the *at functions are much more suited to 2024. I'm still waiting to see a language's standard library present them as the default API and diminish (even if it is probably impossible to "remove") the standard functions. Links to any such language I'm not aware of welcome, since of course I do not know all language standard libraries.

[1]: https://linux.die.net/man/2/openat , see also all the other "See Also" at the bottom with functions that end in "at"
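
To make it concrete, here is roughly what the openat pattern looks like from Go via golang.org/x/sys/unix (just a sketch; the paths are illustrative and this is Linux-specific):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Open the directory once and keep the descriptor; later opens are
        // resolved relative to it, so the directory path can't be swapped
        // out from under you between calls (the trap plain open() falls into).
        dirfd, err := unix.Open("/tmp/workdir", unix.O_DIRECTORY|unix.O_RDONLY, 0)
        if err != nil {
            fmt.Fprintln(os.Stderr, "open dir:", err)
            return
        }
        defer unix.Close(dirfd)

        // Relative path, resolved against dirfd rather than the CWD.
        fd, err := unix.Openat(dirfd, "data.txt", unix.O_RDONLY, 0)
        if err != nil {
            fmt.Fprintln(os.Stderr, "openat:", err)
            return
        }
        defer unix.Close(fd)

        buf := make([]byte, 64)
        n, _ := unix.Read(fd, buf)
        fmt.Printf("read %d bytes\n", n)
    }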


Really, the only web interface I would consider putting on my machines is this one, as it uses the normal system facilities where available instead of doing everything custom.


Yes, Nim macros can fiddle with the AST: https://nim-lang.org/docs/macros.html

You can also see another (I think) neat example in `npeg`: https://github.com/zevv/npeg?tab=readme-ov-file#quickstart


A longer read on the design and delivery process of a super fast hardware project for a large hacker camp :)


We use 'e': latines, etc.


We who? -e and -@ are both used, sure, but studies and surveys similar to those showing that -x is rarely used in the community show those are even rarer.


Not sure about the culture of Americans with Hispanic heritage, but in actual Spanish-speaking countries, at least the ones I have experience with, @ is common (and a bit old), e is somewhat common, and x is pretty much unheard of.


It sadly seems that information on the outage is slow and few and far between, apparently stuck at an update once every 8 or so hours, with a bunch of VMs still unreachable and unresponsive in the management console :(


This doesn't remove the object from the DOM after the animation is over, which the example on the website does. You'll need to add something that removes the element after the transition is over, which would be a bunch more code (an event listener for transitionend) :)


Thanks for the heads up before the Hacker News edit window closed. +1 LOC


FYI you can set a "delay" in your profile of up to 10 minutes before your comments are shown, giving you more time to edit. May not have helped in this case but it's a cool feature.


That's a nice tip. Thanks!


One can learn a lot from reading the FAQ: https://news.ycombinator.com/newsfaq.html :)


:D that's fitting for HN.

Instead of finding a more descriptive name or adding a short description below, they put it on a completely different site.


Congrats, Nim is really becoming more and more mature as time goes on, and showing off new features like these is great, though they do come off a bit as "a not-done language"?

What is the plan for ARC/ORC stability and possible default? Are we looking at 1.6 or later (1.8?). Will the GCs (since ARC isn't really a GC) be deprecated at that time?


One of the links at the end of the article leads to the RFC that talks exactly about this; see especially the last comment by Araq: https://github.com/nim-lang/RFCs/issues/177#issuecomment-696...

In short: yes, the current plan is to phase out other GCs over time.


Perfect, all except markAndSweep/go, thank you!


> Will the GCs (since ARC isn't really a GC) be deprecated at that time?

Any form of reference counting is definitely a GC.

http://gchandbook.org/contents.html (chapter 5)


I guess the more interesting question is whether it happens at compile time or at runtime - ARC definitely injects the ref-count increments/decrements at compile time and therefore there are no GC "pauses" at runtime.

(Right?)


There are always pauses at runtime: a reference count reaching zero can start a domino effect of further references being decreased to zero.

Which is why in most high-performance RC systems you get a tracing GC in disguise, because the actual deletion is moved into a background cleaning thread.

An example of this in production is the C++/WinRT framework for COM/UWP.


In that regard it's not too dissimilar to Rust's semantics for when memory is freed (https://news.ycombinator.com/item?id=23362518).

Nim's ARC does inject incref/decref calls, but they're not atomic, which means its overhead can be pretty minimal. I don't know if ObjC/Swift's ARC uses atomics or locking. GTK's object system, for example, uses locking, which makes it pretty expensive.


Ah, that's true.


If you want to collect cycles, you have to find them, and that may cause pauses.


> Will the GCs (since ARC isn't really a GC) be deprecated at that time?

Not sure why you don't consider ARC a GC - it is one, with the caveat that it doesn't break cycles on its own. If you have cycles you aren't going to break on your own, use ORC.


Is there a good reason to invest time in Nim for someone who already knows and likes Rust?


It entirely depends on what kind of things you want to do. For example if you want to program microcontrollers with a modern language and 0 overhead over C then Nim is a great language to learn. Or if you want to write a web front-end and back-end in the same language that compiles to JS and C respectively then Nim is yet again a great language. Or if you just want to get into programming with macros to boost your productivity or make nice patterns in your code, then Nim is a great language. Or if you are just curious as to what else that is out there, then Nim is a great language.


I am more interested in having a great ecosystem and tooling. Probably I just want a better, faster and safer Java/C#/Go. All of these seem to be quite true with Rust.


Whilst there is a pretty well fleshed out stdlib, since Nim can compile to C, C++, Javascript, and even LLVM, you can use any library written for those languages/platforms. That's a huge mass of ecosystems that are natively accessible.

Nim's FFI is excellent and makes this very easy without worrying about the ABI (since you're compiling to the target languages).

There's also excellent interop with Python with embedding Python within Nim or calling Nim from Python.


This sounds like "wishful programming" without type safety (or it needs types written as libraries). The only place where I've seen this work well is TypeScript, and it has a massive community (and Microsoft) behind its back.

Currently I write backends in Rust and frontends in TypeScript and it works really, really well.


https://nim-lang.org/ -- says "statically typed" within the first 7 words of the main description of the language on the homepage... Most of the features happen through the type system (eg. functions are always top-level and looked up through UFCS), and it has generics and so on.


Nim is strongly, statically typed. It's very type safe.

Nim being able to compile to Javascript and, say, C means you can write your server and web client in the same language. This means you can share code between client and server which is particularly handy for having modules with type declarations imported by both for example, so you have a unified place to update them.

You also get to use Nim's strong type guarantees and metaprogramming when outputting Javascript. An example of why this is useful is generating say, RPC calls from a static JSON file that are automatically unified between server and client.


I too would love a C#-like ecosystem for Nim, but I don't think that is possible, especially because of the magic macro stuff. The thing I like about Nim is the joy of writing, and writing less, and I've kinda started to love the whitespace; the only con I can see is that there aren't like 20 paid people working full time on the language.


I would say one of Nim's biggest strengths is the productivity of Python with the performance of C; it feels like scripting, but you have production ready performance.

Another productivity benefit is compiling to C, C++, and Javascript means you can natively use libraries from those languages.

If you want to go deeper, check out the extremely powerful metaprogramming capabilities that rival Lisp's and, I would argue, are much more advanced than Rust's. The end result of this is very nice syntax constructions, which means easy-to-read code, and again translates to high productivity with no performance loss.


To me this is more of an argument against. Although Python was my first language, I've had my share of burnouts because of its lack of types and difficulties in refactoring big code-bases.


> I've had my share of burnouts because of its lack of types and difficulties in refactoring big code-bases.

Nim is strongly, statically typed. Also, I agree and aside from performance it's why I started using Nim instead of Python for my gamedev hobby. Dynamic typing isn't good for large projects. Nim has great type inference though, so you get the readability of Python but with strong compile time type guarantees.

E.g.:

    type SpecialId = distinct int # This int is now type incompatible with other ints
    proc handleId(id: SpecialId) =
      discard # ... do something with id.


Nim has productivity, expressiveness and readability that matches Python. Not the dynamic typing.


Don't be misled - the only thing Nim really has from Python is the off-side rule (indentation). Nim is statically typed and compiled to native binaries.


Yeah, I should have stressed that it's just the productivity that matches Python, not how it operates. As you say, its similarity is only skin deep, with the significant whitespace.

In terms of use I find it feels a lot like a better Pascal but with powerful metaprogramming.


Another reason: compile times are super fast compared to Rust and C++ even with lots of metaprogramming.


I think this is really important for application programming, and one of the things that is promising to me about Nim. Also the metaprogramming / DSL stuff helps create e.g. specific DSLs for UI programming and things like that (something we've seen people like with JSX in React, for example). It can go as far as https://github.com/krux02/opengl-sandbox (an OpenGL pipeline DSL) or https://github.com/yglukhov/nimsl (which literally converts Nim functions to GLSL (!)).


Compile times can be even faster (like 200 milliseconds) with the TinyCC/tcc backend. (Of course, optimization of the generated machine code suffers some.)

