Donald Bitzer was the first Computer Science professor I had in college. He taught a discrete mathematics course (Boolean logic).
Though he was on the older side when I took his course, he still brought laughter and enthusiasm to his classes, and set the tone for the rest of my college career. He will be missed!
Same. It was my first CS class after transferring to NCSU in 2000. I was pretty lost at times, but he was super kind and patient whenever I went to office hours. He was an older professor at that time, and it was always cool to see he was still teaching well beyond when I graduated.
Ditto, I took his discrete math course at NCSU in 1998. It was mainly taught by Tiffany Barnes day to day (who was also nice and a great explainer), but Bitzer was often present and always smiling and jovial.
I really regret having spent so little time interacting with my professors, though. I was one of those kids who spent as little time in class as possible, almost never going to office hours, aiming to get the coursework out of the way ASAP so I could "have a life". So much wisdom and life/industry experience was concentrated on that campus and at my fingertips, but I totally took it for granted. Seeing his obit amplifies this feeling; I wish I had cared enough at the time to meet and know the guy.
Yeah, there were some other older professors at NC State who had clearly aged out of knowing the state of the art, but Bitzer was an enthusiastic teacher who cared about engaging students, and still knew his stuff.
People may "prefer" simply replacing containers, but as some siblings mention, some applications might require more reliability guarantees.
Erlang was originally designed for implementing telephony protocols, where interrupted phone calls were not an acceptable side effect of application updates.
FWIW, as soon as you start using containers you should be able to handle those containers spinning up and down; that's pretty much the whole point of containers. At which point you don't need to bother with code hot swapping, since you already have a mechanism for newer containers to spin up while older ones spin down.
The sibling post “that’s how they update without downtime” is super naive. It is absolutely not how they do it.
If we were to wedge how Erlang does hot code swapping into a container metaphor, then to get what Erlang does, you'd need to have a container per function call.
Given that it would be absurdly wasteful to use OS processes in containers to clone Erlang's code reload system, AnotherGoodName might take ten minutes to watch Erlang: The Movie to get a better sense of the capabilities of that system. The movie is available from many places, including archive.org.
>If we were to wedge how Erlang does hot code swapping into a container metaphor, then to get what Erlang does, you'd need to have a container per function call.
You have a container that responds to HTTP requests sitting behind a load balancer; then you spawn a new container and tell the load balancer to redirect calls to the new one. From the point of view of whoever is calling the load balancer, you have hot swapping. You may even separate containers into logical groups and call it a microservices architecture. Or you can define a process as something that has a qualified name and a mailbox and sends messages to other processes, as sketched below.
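That last definition is the Erlang one, and it costs a few lines of code rather than a container. A purely illustrative sketch (the `echo_service` name is made up), runnable in an Erlang shell:

    %% A "process" here is just a registered (qualified) name plus a
    %% mailbox, exchanging messages with other processes.
    Pid = spawn(fun Loop() ->
              receive
                  {From, Msg} -> From ! {self(), {echoed, Msg}}, Loop()
              end
          end),
    register(echo_service, Pid),
    echo_service ! {self(), hello}.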
Now, reasonable people may disagree about what's wasteful, but the market seems to tolerate places where adding a checkbox to a form takes half a year and involves five different departments, and the market can't be wrong.
Sure, you can shut down and restart your entire application. You could do that back in 1990 without containers, too.
The thing is that Erlang does hot reload at a per-function (or, according to Hebert, sometimes finer-grained) level, so nuking the entire program and paying the cost to start it up again is not at all the same thing as, say, using a not-absurdly-priced AWS Lambda [0] or similar to get per-function hot reloading.
By the way, have you read "A Pipeline Made of Airbags"? If not, you should give it a read: <https://ferd.ca/a-pipeline-made-of-airbags.html>. It might be old news to you, but maybe not.
I hadn't read that one before, but I share the sentiment. We can't have cool things; it was all dumbed down, so the worst case became the default mode of operation. This didn't happen only with hot reload in Erlang; it happens all the time, at all levels.
Erlang does have a mechanism that allows a module to control when it moves from the "old version" to the "new version" of its own code. Calls to the module with the fully qualified name (e.g. `module:function()`) will invoke the "new code" once it's loaded, but calls within that module using only function names (just `function()`) will continue to invoke the "old code".
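A module that wants to survive a reload exploits that split on purpose. A minimal sketch (a hypothetical `counter` module; the names are made up for illustration):

    -module(counter).
    -export([start/0, loop/1]).

    start() ->
        spawn(fun() -> loop(0) end).

    loop(N) ->
        receive
            {bump, From} ->
                From ! {count, N + 1},
                %% Local call: stays on the version of the module this
                %% process is already running.
                loop(N + 1);
            upgrade ->
                %% Fully qualified call: jumps to the newest loaded
                %% version of the module.
                counter:loop(N)
        end.

Load a new version with c(counter). (or code:load_file(counter)) and send the process the upgrade message, and it crosses over to the new code; until then it keeps running the old version. The VM keeps at most two versions of a module loaded at once.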
If the portion of the app you were hot upgrading was an OTP process like a GenServer, you could, at least in theory, wait on some sort of coordination mechanism to make that fully qualified function call atomically after the new code has loaded.
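OTP actually ships coordination primitives for this. A sketch from a shell, against a hypothetical locally registered gen_server called `my_server` backed by `my_module` (not a full release upgrade, just the moving parts):

    %% Stop the server from processing normal messages, swap its code,
    %% then resume it. release_handler does essentially this during an
    %% appup-driven upgrade.
    ok = sys:suspend(my_server),
    {module, my_module} = code:load_file(my_module),
    ok = sys:change_code(my_server, my_module, undefined, []),
    ok = sys:resume(my_server).

The sys:change_code/4 call is what triggers the server's code_change/3 callback while it is suspended.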
We use hot code reloading at my work, but we haven't had a reason to atomically sync the reload. Most of the time it's a tmux session with `synchronize-panes`, and that suffices. If your application can handle upgrades within a module smoothly, it's rare to need cluster-level coordination of a code change, let alone atomic coordination.
Came here to say exactly this - any nurse, PA, or other hospital staff person has likely heard of "Vocera", the hospital communication platform.
Beyond just accidental confusion, I'd be worried about real legal issues with trademark infringement. Trademarks primarily exist to prevent customers from being confused about which business they're interacting with, and this is a great example of the types of things they're trying to prevent. (I am not a lawyer, so take this with a grain of salt.)
As I was building up my understanding/intuition for the internals of transformers + attention, I found 3Blue1Brown's series of videos (specifically on attention) to be super helpful.
Marshall Brain's contributions to the entrepreneurship program more broadly were extremely significant. I never had him as a professor, but his influence on the program was clear, even to me.
Imagine how those of us who played Ingress (Niantic's first game) feel... We were tricked into contributing location data for the game we loved, only to see it reused for the far more popular (and profitable) Pokemon Go.
Why would anyone take issue with this? Asking as someone who tried both games at different points.
Niantic was always open with the fact that they gather location data, particularly in places cars can't go - I remember an early blog post saying as much before they were unbundled from Google. No one was tricked, they were just not paying attention.
They were pretty up-front about it being a technology demo for a game engine they were building. It was obvious from the start that they would build future games on the same platform.
Right? I feel like I'm taking crazy pills here and on Lemmy. The whole point of Ingress was that it was made to sell Google mapping data and point-of-interest data; that's why the game went without monetization for so long (of course it started monetizing once all the data was sold, but hey).
I'm with you and the previous commenter. People who feel "tricked" were only fooled by their own blindness. Sorry, but trying to garner sympathy for that is like being asked to feel bad for the stripper who takes her clothes off for money; both 100% knew what they were getting into, and no other reasonable expectations can be had from engaging in that situation.
Facebook has been around something like a decade now? I forget the exact number, but it's been long enough that everyone should have learned their lesson at this point: if you are creating data, be it personal, geospatial, or otherwise, by using a product, expect that data to be used as a commodity by the makers of said product.
I'm not sure that rolling deployments guarantee you won't lose connections, depending on the type of connection. Imagine your customer is downloading a large file over a single TCP connection, and you want to upgrade the application mid-download.
With rolling deployments, your only choice is to wait until that connection drains by completing or failing the download. If that doesn't fit your use case, you're out of options.
If your application is an Erlang app, you could hot code reload an unaffected part of the application while the download finishes. Or, if the part of the application handling the download is an OTP pattern that supports hot code reloading (like a gen_server), you could even make changes to that module and release, e.g., speed improvements mid-download. This is why Erlang shines in applications like telephony, which it was originally designed for.
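The hook that makes the mid-download upgrade safe is the gen_server's code_change/3 callback, which OTP runs while the process is suspended so the module can migrate its in-flight state. A sketch, with a made-up downloader server whose new version adds a chunk size so it can stream faster:

    %% Called by OTP during the upgrade. Receives the old state and
    %% returns the state the new code expects; the open socket (and
    %% therefore the customer's download) is carried across untouched.
    code_change(_OldVsn, {Socket, BytesSent}, _Extra) ->
        NewChunkSize = 65536,
        {ok, {Socket, BytesSent, NewChunkSize}}.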
>With rolling deployments, your only choice is to wait until that connection drains by completing or failing the download. If that doesn't fit your use case, you're out of options.
one of the cool things about unix is (and perhaps windows can do this in the right modes, idk) that the running copy of a program holds a link to the code on the disk (a link is a reference to the file's contents, without the file name). You can delete a running program from the disk and replace it with a new program, but the running copy will continue and not be aware that you've done that. You don't need to wait until the program finishes anything.
on an everyday basis, this is what happens when you run software updates while you are still using your machine, even if your currently active programs get updated. You'll sometimes notice this in a program like Firefox: it will lose its ability to open new tabs. That's because they go out of their way to do that; they wouldn't have to if they wanted to avoid it, they could just fork existing processes.
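You can poke at that link-vs-file behavior from any language; e.g. from an Erlang shell on a unix box (the file name here is made up):

    %% Open a file, delete its name, and keep reading: the open file
    %% descriptor pins the underlying inode, so the data stays readable
    %% until the last reference goes away. Running binaries work the
    %% same way.
    {ok, Fd} = file:open("app.bin", [read, binary]),
    ok = file:delete("app.bin"),
    {ok, Bytes} = file:read(Fd, 1024).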
Right, but in this example, to "pick up" the code after you have updated it, you still have to trigger a restart of the program somehow. Controlling that handoff can prove challenging if you're just swapping out the underlying binary.
> one of the cool things about unix is (and perhaps windows can do this in the right modes, idk) that the running copy of a program holds a link to the code on the disk (a link is a reference to the file's contents, without the file name). You can delete a running program from the disk and replace it with a new program, but the running copy will continue and not be aware that you've done that. You don't need to wait until the program finishes anything.
An even cooler thing is that the running code is just mmapped into memory. One of the nifty things about mmapped files is that if you change the backing file, the change can show up everywhere it's mapped.
Not my recommended way to hot load code, but it might work in a pinch.
unlink, replace, start a new one, have the old one stop listening does work for many things. Some OSes have/had issues with dropping a couple of pending connections sometimes, or you have to learn the secret handshake to do it right. A bigger problem is that if your daemon is sized to fit your memory, you might not be able to run a draining daemon and a filling daemon at once.
It also doesn't really solve the issue of changing code for existing connections; it's a point-in-time migration. Easier to reason about, for sure, but not always as effective.
I've used a Samsung T5 SSD as my CacheClip location in Resolve and it works decently well! Resolve doesn't always tolerate disconnects very well, but when it's plugged in things are very smooth.