You'd be hard-pressed to convince me that the Windows model for locking files is superior to what Unix offers, at least as far as file deletion goes. Conceptually, it's pretty simple:
* Files are blobs of storage on disk referenced by inode number.
* Each file can have zero or more directory entries referencing the file. (Additional directory entries are created using hard links.)
* Each file can have zero or more open file descriptors.
* Each blob has a reference count, and its disk space can be reclaimed once the reference count drops to zero.
Interestingly enough, this means that 'rm' doesn't technically remove a file - what it does is unlink a directory entry. The 'removal' of the file is just what happens when there are no more directory entries to the file and nothing has it open.
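A minimal C sketch of that behaviour (the path and contents are placeholders, and error handling is kept short): the open descriptor keeps the data readable after the unlink, and the blocks only go away when the last reference is dropped.

    /* Minimal sketch: unlink() removes the directory entry, but an open
     * descriptor keeps the inode and its data alive until it is closed. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/demo.txt";   /* placeholder path */
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "still here\n", 11);
        unlink(path);                 /* the 'rm': directory entry is gone... */

        char buf[32];
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) {                  /* ...but the data is still readable */
            buf[n] = '\0';
            printf("read after unlink: %s", buf);
        }
        close(fd);                    /* last reference; now space is reclaimed */
        return 0;
    }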
In addition to letting you delete files without worrying about closing all the accessing processes, this also lets you do some useful things to help manage file lifecycle. e.g.: I've used it before in a system where files had to be both in an 'online' directory and in another directory where they were queued up to be replicated to off-site storage. The system had a directory entry for each use of the file, which avoided the need to keep a bunch of copies around, and deferred the problem of reclaiming disk storage to the file system.
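As a rough sketch of that kind of hard-link lifecycle (the directory names are made up for illustration, not taken from the system above): each stage holds its own directory entry, and the filesystem reclaims the storage only when the last one is unlinked.

    /* Sketch only: one blob on disk, one directory entry per use. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Queue the online file for replication by adding a second entry. */
        if (link("online/data.bin", "replicate-queue/data.bin") != 0) {
            perror("link");
            return 1;
        }

        /* ... replication worker ships replicate-queue/data.bin off-site ... */

        /* Each stage unlinks its own entry when it is done; the blocks are
         * freed by the filesystem once the last entry (and any open
         * descriptors) are gone. */
        unlink("replicate-queue/data.bin");
        unlink("online/data.bin");
        return 0;
    }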
> Now replace the contents of the file being worked on, or delete it, in the context of a multi-process application that passes the name around via IPC. ...
Sure... if you depend on passing filenames around, removing them is liable to cause problems. The system I mentioned before worked as well as it did for us, precisely because the filenames didn't matter that much. (We had enough design flexibility to design the system that way.)
That said, we did run into minor issues with the Windows approach to file deletion. For performance reasons, we mapped many of our larger files into memory. Unfortunately, because we were running on the JVM, we didn't have a safe way to unmap the memory mapped files when we were done with them. (You have to wait for the buffer to be GC'ed, which is, of course, a non-deterministic process.)
On Linux, this was fine because we didn't have to have the file unmapped to delete it. However, on our Windows workstations, this kept us from being able to reliably delete memory mapped files. This was mainly just a problem during development, so it wasn't worth finding a solution, but it was a bit frustrating.
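At the OS level, the Linux behaviour we were relying on looks roughly like this in C (a sketch, obviously not our JVM code; the path is a placeholder):

    /* Sketch: on Linux the file can be unlinked while a mapping is live;
     * the pages stay valid until munmap()/exit. Windows refuses the delete
     * while the file is mapped, which was exactly our problem. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/mapped.bin";
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

        char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        close(fd);      /* the mapping holds its own reference to the file */
        unlink(path);   /* fine on Linux, even though the mapping is still live */

        strcpy(map, "mapping still valid after unlink");
        printf("%s\n", map);

        munmap(map, 4096);   /* only now can the blocks actually be freed */
        return 0;
    }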
If you rm a file that someone else is reading, they can happily keep reading it for as long as they like. It's only when they close the file that the data becomes unavailable.
I know how inodes work, thank you. My first UNIX was Xenix.
Applications do crash when they make assumptions about those files.
For example, when they are composed of multiple parts and pass the filename on for further processing via IPC, or have another instance generate a new file with the same name, instead of appending, because the old one is now gone.
Another easy way to corrupt files is simply to access them without using flock, given its cooperative (purely advisory) nature.
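To be clear about what 'cooperative' means here: flock only protects you against processes that also take the lock; everyone else can scribble over the file regardless. A minimal sketch (the path is made up):

    /* Advisory locking with flock(2): this excludes other *cooperating*
     * writers, but a process that skips flock can still write concurrently. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/tmp/shared.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (flock(fd, LOCK_EX) != 0) {   /* blocks until we hold the lock */
            perror("flock");
            return 1;
        }

        /* Safe only against writers that also take LOCK_EX. */
        write(fd, "cooperative append\n", 19);

        flock(fd, LOCK_UN);
        close(fd);
        return 0;
    }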
I surely prefer the OSes that lock files properly, even if it means more work.
> For example, when they are composed of multiple parts and pass the filename on for further processing via IPC
Of course, the proper way to do this in POSIX is to pass the filehandle.
POSIX is actually a pretty cool standard; the sad thing is that one doesn't often see good examples of its capabilities being used to their full extent.
For this, I primarily blame C: it's so verbose and has such limited facilities for abstraction that it's often difficult to see the forest for the trees. Combine that with a generation of folks whose knowledge of C dates back to a college course or four, in which efficient POSIX usage may not have been a concern, and good, easy-to-read examples of POSIX are harder to find than they really should be.
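For what it's worth, here is roughly what 'pass the filehandle' looks like in practice: hand the open descriptor to another process over a UNIX-domain socket with SCM_RIGHTS, so the receiver never touches the pathname at all. (A sketch with error handling trimmed; the file being opened is arbitrary.)

    /* Sketch: passing an open file descriptor between processes over a
     * UNIX-domain socketpair using SCM_RIGHTS, so no filename is needed
     * on the receiving side. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void send_fd(int sock, int fd) {
        char dummy = 'x';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl.buf, .msg_controllen = sizeof ctrl.buf };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
        sendmsg(sock, &msg, 0);
    }

    static int recv_fd(int sock) {
        char dummy;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl.buf, .msg_controllen = sizeof ctrl.buf };
        recvmsg(sock, &msg, 0);
        int fd;
        memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(int));
        return fd;
    }

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        if (fork() == 0) {                        /* child: the "worker" */
            close(sv[0]);
            int fd = recv_fd(sv[1]);              /* no pathname involved */
            char buf[64];
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("worker read: %s", buf); }
            close(fd);
            _exit(0);
        }

        close(sv[1]);
        int fd = open("/etc/hostname", O_RDONLY); /* any file; path is arbitrary */
        send_fd(sv[0], fd);
        close(fd);                                /* the copy lives on in the child */
        wait(NULL);
        return 0;
    }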
Unfortunately, POSIX also shares the same implementation-defined behaviour as C.
I had my share of headaches when porting across UNIX systems.
Also, POSIX has now stagnated around support for CLI and daemon applications. There are hardly any updates for new hardware or server architectures.
Actually, having used C compilers that only knew K&R, I would say POSIX is what should have been part of ANSI C, but they didn't want a large runtime tied to the language.
After working as a Windows admin for a few years, I feel that it is one of those things that sound like a great idea at first, but are causing more problems than they prevent.
But years as a Unix user might have made me biased. I am certain people can come up with lots of stories about how a file being opened prevented them from accidentally deleting it or something similar. I am not saying it is a complete misfeature, just a very two-edged sword.
It looks to me like the "can't replace/delete a file that is opened" rule is one of the factors that cause the malware phenomenon on Windows. That is, you must reboot to effect some software updates. Frequent reboots meant that boot sector viruses were possible.
That policy also means that replacing some critical Windows DLLs means a very special reboot, one that has to complete, otherwise the entire system is hosed.
Sure, boot sector viruses have existed since CP/M days - those systems were almost entirely floppy-disk based and required a lot of reboots. Removable boot media + frequent reboots = fertile environment for boot sector viruses.
To be honest, that sounds like a pretty bad setup if you need to manually delete files when you know there's a chance that the system is not only operating on them but also not stable enough to handle exceptions arising from accessing them.
But arguments about your system aside, you could mitigate user error by installing lsof[1]. e.g.:
    $ lsof /bin/bash
    COMMAND   PID USER FD  TYPE DEVICE SIZE/OFF NODE NAME
    startkde 1496 lau  txt REG  0,18     791304 6626 /usr/bin/bash
    bash     1769 lau  txt REG  0,18     791304 6626 /usr/bin/bash
Well, on Linux distros it is not uncommon for individual packages to be updated as updates become available. So if there is an update to, say, the web server, it is sufficient to restart the web server.
On BSD systems, kernel and userland are developed in lock step, so it's usually a good idea to reboot to be sure they are in sync.
Usually you'd be running the same userland daemons on FreeBSD that you might on Linux. Web servers, databases, OpenSSHd, file networking protocols (FTP, SMB, NFS, etc), and so on aren't generally tied to a particular kernel version since ABIs are not subject to frequent changes (it would be very bad if they were).
So with that in mind, you can update your userland on FreeBSD without updating your kernel in much the same way as you can do with Linux. Though it is recommended that you update the entire stack on either platform.
Updating the FreeBSD base (to the next major release) without updating the kernel is Not a Good Idea. Backward compatibility is there, down to version 4.x, but no one guarantees forward compatibility!
Applications will usually work, but the base system will likely break in some places.
It's not at all strange. Security updates to publicly-facing services should be applied as fast as possible. Kernel vulnerabilities are a whole different attack surface.
Kernel vulnerabilities can be combined with user space vulnerabilities. e.g. a buggy web script might allow an attacker shell access under a restricted UID. The attacker could then use a kernel vulnerability to elevate their permissions to root.