
You'd be hard-pressed to convince me that the Windows model for locking files is superior to what Unix offers, at least as far as file deletion goes. Conceptually, it's pretty simple:

* Files are blobs of storage on disk, referenced by inode number.

* Each file can have zero or more directory entries referencing it. (Additional directory entries are created using hard links.)

* Each file can have zero or more open file descriptors.

* Each blob has a reference count; its disk space can be reclaimed when the reference count goes to zero.

Interestingly enough, this means that 'rm' doesn't technically remove a file - what it does is unlink a directory entry. The 'removal' of the file is just what happens when there are no more directory entries pointing to it and nothing has it open.

https://github.com/dspinellis/unix-history-repo/blob/Researc...
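The semantics are easy to see from a scripting language. This is a small Python sketch (the file name and contents are made up for illustration) showing that unlinking the last directory entry doesn't destroy the data while a descriptor is still open:

```python
import os
import tempfile

# Create a scratch file and keep an open descriptor to it.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")

# Remove the only directory entry. The name is gone immediately...
os.unlink(path)
assert not os.path.exists(path)

# ...but the blob survives, because our descriptor still references it.
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 1024)
print(data)  # b'still here'

# Only now - no links, no open descriptors - can the space be reclaimed.
os.close(fd)
```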

In addition to letting you delete files without worrying about closing every process that has them open, this also lets you do some useful things to help manage file lifecycle. e.g.: I've used it before in a system where files had to be in an 'online' directory and in another directory where they were queued up to be replicated to off-site storage. The system had a directory entry for each use of the file, which avoided the need to keep a bunch of copies around, and deferred the problem of reclaiming disk space to the file system.
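That scheme can be sketched in a few lines of Python (the directory names 'online' and 'replication_queue' are hypothetical stand-ins for the layout described above; hard links require both entries to live on the same filesystem):

```python
import os
import tempfile

# Hypothetical layout: one payload blob, one directory entry per use.
root = tempfile.mkdtemp()
online = os.path.join(root, "online")
queue = os.path.join(root, "replication_queue")
os.makedirs(online)
os.makedirs(queue)

payload = os.path.join(online, "data.bin")
with open(payload, "wb") as f:
    f.write(b"payload")

# Queue the file for replication via a hard link: no copy is made,
# both names reference the same inode.
os.link(payload, os.path.join(queue, "data.bin"))
assert os.stat(payload).st_nlink == 2

# When replication finishes, drop the queue entry. The 'online' entry
# is untouched; the filesystem reclaims the blob only when the last
# link goes away (and nothing has it open).
os.unlink(os.path.join(queue, "data.bin"))
assert os.path.exists(payload)
assert os.stat(payload).st_nlink == 1
```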




> You'd be hard-pressed to convince me that the Windows model for locking files is superior to what Unix offers,

This model is not unique to Windows; it's shared by most non-POSIX OSes.

I happen to know a bit of UNIX (Xenix, DG/UX, HP-UX, AIX, Tru64, GNU/Linux, *BSD).

Yes, it is all flowers and puppies when a single process deals directly with a file. Then the only thing to be sorry about is the lost data.

Now replace the contents of the file being worked on, or delete it, in the context of a multi-process application that passes the name around via IPC.

Another nice one is the data race you get by not using flock() and just opening a file for writing: UNIX locking is cooperative.
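To make the "cooperative" point concrete, here is a Python sketch (POSIX-only, since it uses fcntl): an exclusive flock() lock stops other flock() callers, but does nothing to a writer that never asks:

```python
import fcntl
import os
import tempfile

path = tempfile.mkstemp()[1]

# First writer takes an exclusive advisory lock.
f1 = open(path, "w")
fcntl.flock(f1, fcntl.LOCK_EX)

# A second writer that never calls flock() is not stopped: the kernel
# only enforces the lock against processes that opt in.
f2 = open(path, "w")   # happily truncates the locked file
f2.write("clobbered")
f2.close()

# A cooperating writer, by contrast, sees the conflict.
f3 = open(path, "a")
try:
    fcntl.flock(f3, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("cooperating writer sees the lock")
f3.close()
f1.close()
```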


> This model is not unique to Windows; it's shared by most non-POSIX OSes.

You could also point out that the hardlink/ref-count concept is not unique to POSIX and is present on Windows.

https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...

> Now replace the contents of the file being worked on, or delete it, in the context of a multi-process application that passes the name around via IPC. ...

Sure... if you depend on passing filenames around, removing them is liable to cause problems. The system I mentioned before worked as well as it did for us precisely because the filenames didn't matter that much. (We had enough design flexibility to build the system that way.)

That said, we did run into minor issues with the Windows approach to file deletion. For performance reasons, we mapped many of our larger files into memory. Unfortunately, because we were running on the JVM, we didn't have a safe way to unmap memory-mapped files when we were done with them. (You have to wait for the buffer to be GC'd, which is, of course, non-deterministic.)

http://bugs.java.com/view_bug.do?bug_id=4724038

On Linux, this was fine, because a file doesn't have to be unmapped before it can be deleted. On our Windows workstations, however, this kept us from reliably deleting memory-mapped files. It was mainly a problem during development, so it wasn't worth engineering a workaround, but it was a bit frustrating.
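The POSIX side of that behavior is easy to demonstrate. Our system was on the JVM, but this Python sketch shows the same thing: on Linux, unlinking a file while a mapping is live just works, and the mapping stays valid (it's this unlink call that a Windows file system would refuse while the mapping exists):

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"mapped data")

# Map the file, then delete its directory entry while the mapping lives.
m = mmap.mmap(fd, 0)
os.unlink(path)  # fine on POSIX; the mapping keeps the blob alive

# The mapping remains fully usable even though the name is gone.
data = m[:]
print(data)  # b'mapped data'

m.close()
os.close(fd)
```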





