ynik's comments

The really horrible bufferbloat usually happens when the upload bandwidth is saturated -- upload bandwidth tends to be lower, so the same buffer size causes more latency. I used to have issues with my cable modem, where occasionally the upload bandwidth would drop to ~100kbit/s (from the normal 5Mbit/s), and if this tiny upload bandwidth was fully used, latency would jump from the normal 20ms to 5500ms. My ISP's customer support (Vodafone Germany) refused to understand the issue and only wanted to upsell me to a plan with more bandwidth. In the end I relented and accepted their upgrade offer because it also came with a new cable modem, which fixed the issue. (Back then ISPs didn't allow users to bring their own cable modem -- nowadays German law requires them to allow this.)

> You can send half-random input in and then send more half-random input in until you’re satisfied that the RNG has gotten a suitable amount of entropy.

This does not actually work. If an attacker can observe output of the CSPRNG, and knows the initial state (when it did not yet have enough entropy), then piecemeal addition of entropy allows the attacker to brute-force what the added entropy was. To be safe, you need to add a significant amount of entropy at once, without allowing the attacker to observe output from an intermediate state. But once you've done that, you won't ever need to add entropy again.


You’re right, but I did not read GP to suggest otherwise.

GP does not suggest using the output before enough entropy had been gathered, e.g. see ‘until’ in:

> until you’re satisfied that the RNG has gotten a suitable amount of entropy.


Sibling already answered this. I don’t know how you came to this conclusion.


Not sure if you dropped a "/s".

In my experience, C++ template usage will always expand until all reasonably available compile time is consumed.

Rust doesn't have C++'s header/implementation separation, so it's easy to accidentally write overly generic code. In C++ you'd maybe notice "do I really want to put all of this in the header?", but in Rust your compile times just suffer silently. On the other hand, C++'s lack of a standardized build system led to the popularity of header-only libraries, which are even worse for compile times.


From my point of view, it's more like laziness about learning how to properly use compiler toolchains.


First rule of good design: misuse isn’t the human’s fault.


Even cooler is that it's possible to create an infinite-layer gzip file: https://honno.dev/gzip-quine/


You can still install Windows 11 without a Microsoft account. It requires configuring the installation before you boot from the USB stick.

I use https://rufus.ie/en/ when creating bootable USB sticks, and it turns out that this tool detects when you're trying to create a Windows installation medium, and prompts with a list of useful customizations, including "Remove requirement for online Microsoft account". (if you look through the screenshots on the webpage, there's one with the Windows customization dialog box)


I've used this many times myself and it works great.

However, as I mentioned below, I read something recently that says local account options are being removed in an upcoming version (I can't find the article now).

I presume it means the binaries are being removed from the ISO, so this may no longer work (except for Enterprise and LTSC, I'd imagine).


Does anyone know what the technical change in this patch is?

I think of branch prediction as being something the CPU does internally; where does the operating system come into play?


I still haven't seen anywhere reporting what, exactly, the issue was, and why using the "true Administrator" account on Windows (or using Linux) would avoid it.

My own uninformed guess, given that the "true Administrator" account avoids it, would be some sort of security mitigation, for instance flushing the branch predictor state when entering or exiting the kernel unless the current user is SYSTEM or Administrator.


If you have a hot path with a lot of branch mispredictions, that can impact performance. It could be as simple as Microsoft finding a dumb `if` inside a tight loop.


However, Windows has stacks that commit on demand, which is kinda similar to overcommit. It can result in programs terminating on any random function call (if that call requires committing a new page of stack). https://learn.microsoft.com/en-us/windows/win32/procthread/t...

However, this rarely happens in practice -- stack growth isn't very frequent in typical applications, so usually a malloc() will fail first in low-memory situations.

A Windows program can avoid the risk of getting terminated on stack growth by specifying equal commit and reserve sizes for the stack. So at least in theory, it's possible to write a Windows program that is reliable in low-memory situations.
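With MSVC this is a linker option: `/STACK` takes the reserve size and, optionally, the commit size, so passing the same value for both pre-commits the main thread's whole stack (the 1MiB size below is just an example):

```
REM Reserve and commit 1 MiB up front; stack growth can then
REM no longer fail at runtime for this thread.
link /STACK:1048576,1048576 main.obj
```

Threads created with CreateThread get the same effect by passing the full size as dwStackSize without the STACK_SIZE_PARAM_IS_A_RESERVATION flag, since dwStackSize is the commit size by default.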


But if you're on 64-bit, you can just create the threads with a huge stack size limit (e.g. 1GB) and let the OS handle automatically growing the actual stack size. No need to reinvent the wheel.

Stack size is the one area where Windows does have something like "overcommit" by default, as you can separately configure the reserved and commit sizes of the stack: https://learn.microsoft.com/en-us/windows/win32/procthread/t...


Yes, this approach works on other platforms too. However, I believe that this will not deallocate stack space, while stacker will.


This just delays the problem, though.


That's no longer true: https://www.intel.com/content/www/us/en/developer/articles/t...

x86 hardware does not guarantee anything unless you enable "Data Operand Independent Timing Mode", which is disabled by default and AFAIK can only be enabled by kernel-level code. So unless operating systems grow new syscalls to enable this mode, you're just out of luck on modern hardware.


In the most purely theoretical sense, you are correct, but everything Intel and AMD have said indicates they will still offer strong guarantees on the DOIT instructions:

https://lore.kernel.org/all/851920c5-31c9-ddd9-3e2d-57d379aa...

In other words, they have announced the possibility of breaking that guarantee years before touching it, which is something the clang developers would never do.


But does `printf();` return to the caller unconditionally?

This is far from obvious -- especially once SIGPIPE comes into play, it's quite possible that printf will terminate the program and prevent the undefined behavior from occurring, which means the compiler is not allowed to optimize it out.


`for(;;);` does not terminate, yet it can be removed if it precedes an unreachability assertion.

The only issue is that writing to a stream is visible behavior. I believe that it would still be okay to eliminate visible behavior if the program asserts that it's unreachable. The only reason you might not be able to coax the elimination out of compilers is that they are being careful around visible behavior. (Or, more weakly, around external function calls).

