When I was a teenager there were always a few used mags that we swapped or gave each other. If kids have smartphones, they'll resort to video sharing, and there are few ways to avoid it.
I'm not sure that's a good way to spend taxpayer money.
Pretty safe, until a machine on the network gets infected. The first infection comes from a phishing email or similar; from then on, the worm infects other machines connected to the same network, but usually not across the internet.
It uses a vulnerability in a protocol used for network sharing, and that protocol is usually blocked at your router.
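To illustrate the "blocked at your router" point: a minimal sketch, assuming the protocol in question is SMB on TCP 445 (the source comment does not name it), that probes whether the port is reachable on a given host. If a router or firewall in between blocks the port, the probe simply fails.

```python
import socket

# Illustrative probe: is a given TCP port (e.g. SMB's 445, the
# network-sharing protocol such worms spread over) reachable on a host?
# A router/firewall that blocks the port makes connect_ex() fail.
def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example (the address is hypothetical; use a machine on your own LAN):
# print(port_open("192.168.1.10", 445))
```

This only tests reachability from the caller's vantage point; a port blocked at the perimeter can still be open between machines inside the same LAN, which is exactly why such worms spread internally but not across the internet.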
How do you deal with optional fields in documents? Do you modify the table schema on the fly?
If there's a large-ish number of optional fields, but each document has only one or a few of them, would it create a sparse table with lots of columns? Did you find any problems in these scenarios?
So basically yes. ToroDB creates columns and tables on the fly. All the added columns are nullable, so the ALTER TABLE ADD COLUMN is an almost free operation in PostgreSQL.
Sure, sparse tables are created. This is not a problem, since nulls in PostgreSQL are quite cheap (they require zero or only a few bytes of storage per record).
Even if there is a high cardinality of optional fields, we have not seen the number of columns go beyond a few hundred in real cases. And that's perfectly manageable by PostgreSQL :)
There might be some pathological, degraded use cases. But we have found none of them on real datasets.
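The "columns on the fly" idea above can be sketched as follows. This is an illustration of the general technique, not ToroDB's actual code, and it uses SQLite for portability; the performance claim in the thread (that adding a nullable column is almost free) is specific to PostgreSQL, where `ALTER TABLE ... ADD COLUMN` with no default is a metadata-only change.

```python
import sqlite3

# Sketch: widen the table with a nullable column whenever a document
# carries an optional field the table hasn't seen before. Rows lacking
# a field simply store NULL in that column.
def insert_document(conn, table, doc):
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    for field in doc:
        if field not in existing:
            # New optional field: add it as a nullable column.
            conn.execute(f"ALTER TABLE {table} ADD COLUMN {field}")
    cols = ", ".join(doc)
    marks = ", ".join("?" for _ in doc)
    conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                 list(doc.values()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY)")
insert_document(conn, "docs", {"name": "alice", "age": 30})
insert_document(conn, "docs", {"name": "bob", "city": "oslo"})
```

After these two inserts the table has grown to four columns (`id`, `name`, `age`, `city`), and each row holds NULL for the fields its document lacked, which is the sparse-table shape discussed above.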
And Hewson Consultants did the same in their "interactive video adventure" Avalon circa 1984. The game, for the Sinclair Spectrum, asked for a four-digit code printed in non-photocopiable blue that came in the box. Quite a few games would do the same later, like the Leisure Suit Larry games, Monkey Island, or Elvira: Mistress of the Dark, with different approaches, but Avalon's was pure blue over white paper.
Now combine Hyperloop with self-driving cars that you can take at your convenience at both ends to reach your destination, and most of the problems you stated seem easy to overcome.
Probably in a few years, when the hyperloops are ready, owning a car won't be as common as it is now, but you'll be able to pick up a self-driving one, paying per mile or a fixed monthly fee.
Being a user of both Linux and PostgreSQL, I'm very interested in this issue, but I only understand some of the words...
Could somebody wiser than me tell me whether I should be concerned, and the possible implications of these decisions? Should I invest in alternative platforms?
On the NUMA front, Linux is, as far as I know, leaps and bounds ahead of FreeBSD, Solaris, and Windows. All four platforms offer ways to tune process and memory placement by hand, but Linux is the only one that has put a lot of work into making automatic NUMA scheduling actually work. It is actually a pretty difficult problem. Something to read if you are interested:
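As a small taste of the "by hand" tuning mentioned above, here is a Linux-only sketch that pins the current process to a single CPU via the `sched_setaffinity(2)` wrapper in the Python stdlib. This covers only the CPU side of NUMA placement; binding memory to a node needs `numactl` or libnuma, which the stdlib does not expose.

```python
import os

# Linux-only: restrict which CPUs the current process (pid 0 = self)
# may be scheduled on, then undo the restriction.
all_cpus = os.sched_getaffinity(0)   # CPUs this process may run on now
one_cpu = {min(all_cpus)}            # e.g. just the lowest-numbered CPU
os.sched_setaffinity(0, one_cpu)     # pin to that single CPU
pinned = os.sched_getaffinity(0)
os.sched_setaffinity(0, all_cpus)    # restore the original mask
```

Automatic NUMA scheduling is the kernel doing this kind of placement (plus migrating pages toward the CPUs that touch them) continuously and without manual pinning, which is exactly the hard part.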
Back in ye olden times when I was an Informix DBA, we had to worry about stuff like this for storage.
It was always a fight with the storage guys, because they wanted to use their fancy Veritas File System to optimize disk utilization, while we prima-donna DBAs wanted raw LUNs so the database engine could manage the disks itself, because that maximized our transaction throughput. Some DBAs even wanted whole disks allocated, so they could control where data lived from a disk-geometry POV. There were (mostly) valid arguments for doing this, most of which have gone away over the years.
This is an issue like my disk issue: corner cases that need to be thought about in situations where you are investing lots of engineering effort in your databases. If you don't have a couple of angry DBAs you're always arguing with, you don't need to worry about this.
It is something that can be helpful to keep in the back of your mind if you're tasked with optimizing Postgres and OS settings for big workloads.
But it's one of a myriad of little things, not something that should inform a platform decision. It's much more interesting for kernel devs than for PostgreSQL users.
I think what this shows is that issues related to interactions between RAM, caches and CPU cores are becoming a lot more complex on all platforms.
The first issue is relevant only on systems that have more than one NUMA node, which is probably every meaningful physical server and essentially no VM (at least on Xen, multiprocessor VMs are a single NUMA node), as it does not make much sense to advertise NUMA topology to guest VMs.
The second issue is relevant for PostgreSQL mostly if you use very large shared_buffers, which is not recommended for general workloads anyway. Writing a page that exists on disk and was not read a short time before is not an especially common thing to do.
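For concreteness, a modest configuration along the lines usually recommended (the values are purely illustrative, not a recommendation for any particular machine):

```ini
# postgresql.conf sketch -- illustrative values only
shared_buffers = 8GB          # commonly ~25% of RAM; very large values
                              # can amplify writeback and NUMA effects
effective_cache_size = 24GB   # planner hint: the OS page cache serves
                              # most reads beyond shared_buffers
```

Keeping shared_buffers moderate and letting the OS page cache do the rest is the conventional advice, which is why the kernel-level issue above mostly stays out of sight for typical PostgreSQL deployments.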
NUMA can absolutely bite you on virtual servers, but without access to the hypervisor you'll never know why it's happening (JVMs straddling NUMA regions have caused me pain in the past, when the guest was split across memory regions).
The point is that the kernel inside the VM guest knows nothing about NUMA, so it cannot do any kind of NUMA optimization; hence such optimizations cannot hurt performance, as they do not happen at all.