Cookies are not sent to arbitrary servers. If Google has a separate subdomain they use for authentication (say login.google.com), they can instruct your browser to send the relevant cookie only to that subdomain.
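For illustration (cookie names and values are made up; the attributes are the standard ones from RFC 6265): a cookie set by login.google.com with no Domain attribute is "host-only" and the browser sends it back only to login.google.com, while one with Domain=google.com goes to every google.com subdomain:

```http
Set-Cookie: session=abc123; Secure; HttpOnly; Path=/
Set-Cookie: prefs=xyz; Domain=google.com
```

So the authentication cookie simply omits Domain, and the rest of google.com never sees it.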
Good point, though it sounds like it'd be very challenging to train users to notice the absence of a special image... especially when it's normal for that image to disappear whenever they use a different browser or clear cookies.
Imagine two threads that each safely lock a resource, increment it, copy the value, and then unlock it. This is entirely well-defined, but it's still a race condition.
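A minimal sketch of that scenario in Python (variable names are mine): every access is protected by the lock, so nothing is undefined, yet which thread observes which value is still a race:

```python
import threading

counter = 0
lock = threading.Lock()
copies = []

def worker():
    global counter
    with lock:                   # safely lock the resource
        counter += 1             # increment it
        copies.append(counter)   # copy the value, then unlock on exit

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The final counter is always 2, and the copies are always {1, 2} --
# but WHICH thread saw 1 and which saw 2 depends on scheduling.
print(sorted(copies), counter)  # prints: [1, 2] 2
```

If the code assumed "my copy is the final value", that assumption would only hold for whichever thread happened to run last.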
Race conditions are a lot like unsanitized input. They don't cause problems by themselves, but if you make incorrect assumptions it's easy to write incorrect code.
In Rust, (right now) you can choose between a 1:1 and an N:M mapping between OS threads and Rust tasks. With N:M threading, the runtime necessarily does (some of) that internally.
My experience with cgroups is that it's incredibly difficult to get them to do what you want them to do. But systemd seems to be changing that, so maybe their use will become more mainstream soon.
If one of those scripts forgets to kill a process's children (because the process itself was supposed to handle that), and a server ends up with a bazillion orphaned processes, then it could well become your problem!
> I don't know how other similar languages deal with GUI. It's always sort of hairy, obscure code when you look at it closely.
The best languages for GUIs I've seen are specialized declarative DSLs. Things like QML, XAML, JavaFX FXML, XUL, and even HTML. Anything else very much sucks, in my experience.
And Qt pre-QML. The user interface is described declaratively in XML (which can be edited visually with Qt Designer). You then run a utility (uic) to generate, e.g., a C++ class which you use through composition or inheritance. UI events are wired to code through Qt's signal/slot system.
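For a flavor of it, here's a minimal hypothetical .ui file of the kind Qt Designer edits (class and widget names are made up):

```xml
<ui version="4.0">
 <class>LoginForm</class>
 <widget class="QWidget" name="LoginForm">
  <layout class="QVBoxLayout">
   <item>
    <widget class="QLineEdit" name="userEdit"/>
   </item>
   <item>
    <widget class="QPushButton" name="loginButton">
     <property name="text">
      <string>Log in</string>
     </property>
    </widget>
   </item>
  </layout>
 </widget>
</ui>
```

Running uic over this emits a Ui_LoginForm class with those widgets as members, and you connect loginButton's clicked() signal to whatever slot handles the login.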