
Right. You should always test the command first. If the data is critical, use a temporary file instead. I usually use this in scripts so I don’t have to deal with cleanup.
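For context, the tool under discussion is sponge(1) from moreutils, which buffers all of its input before opening the output file, so `sort file | sponge file` doesn't clobber the input the way `sort file > file` would. A rough stand-in sketching the idea (mini_sponge is a made-up name; the real sponge also preserves the output file's permissions, which this sketch does not):

```shell
# Minimal stand-in sketching what sponge(1) does: buffer ALL of stdin
# in a temp file before touching the destination.
mini_sponge() {
    t=$(mktemp) || return 1
    cat > "$t" && mv "$t" "$1"    # mv may copy if /tmp is a different fs
}

printf 'b\na\n' > /tmp/demo.txt
# Naive 'sort /tmp/demo.txt > /tmp/demo.txt' would truncate the input
# before sort could read it; buffering first avoids that.
sort /tmp/demo.txt | mini_sponge /tmp/demo.txt
```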



> If the data is critical, use a temporary file instead

Use a temporary file always. The sponge process may be interrupted, and you end up with a half-written /etc/passwd.


Couldn't `mv` or `cp` from the temp file to `/etc/passwd` be interrupted as well? I think the only way to do it atomically is a temporary file on the same filesystem as `/etc`, followed by a rename. On most systems `/tmp` will be a different filesystem from `/etc`.


mv can't, or, more precisely, the rename() system call cannot.

rename() is an atomic operation from any modern filesystem's perspective: you're not writing new data, you're simply changing the name of an existing file, so it either succeeds or fails as a whole.

Keep in mind that if you're doing this, mv (the command-line tool), as opposed to the rename() system call, falls back to copying if the source and destination are on different filesystems, since you cannot actually rename a file across filesystems!

In order to have truly atomic writes you need to:

1. open a new file on the same filesystem as your destination file

2. write the contents

3. call fsync()

4. call rename()

5. call sync() (if you care about the file rename itself never being reverted).
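A minimal shell sketch of those steps, using a scratch target so it's safe to run (paths are illustrative; `sync FILE` is the coreutils idiom for fsync-ing a single path, available since coreutils 8.24):

```shell
#!/bin/sh
# Sketch of the atomic-write recipe above. Note the temp file is created
# in the SAME directory as the target, so the mv is a rename(2), not a copy.
target=/tmp/demo-target.conf          # stand-in for e.g. /etc/passwd
dir=$(dirname "$target")

tmp=$(mktemp "$dir/.atomic.XXXXXX") || exit 1   # open on same filesystem
printf 'new contents\n' > "$tmp"                # write contents
sync "$tmp"                                     # fsync the data (coreutils >= 8.24)
mv "$tmp" "$target"                             # rename, atomic on one fs
sync "$dir"                                     # persist the directory entry itself
```

In C or Go you'd call fsync(2) and rename(2) directly; the linked atomicxt code does essentially the same dance.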

This is some very naive golang code (from when I barely knew golang) for doing this which has been running in production since I wrote it without a single issue: https://github.com/AdamJacobMuller/atomicxt/blob/master/file...


Not clear on the need for fsync and sync.

Are those for networked filesystems like NFS, or just insurance against crashes?

Logically, on a single system there would be no effect, assuming error-free filesystem operation. Unless I'm missing something.


Without the fsync() before rename(), on a system crash you can end up with the rename having been executed but the data of the new file not yet written to stable storage, losing the data.

ext4 on Linux (since 2009) special-cases rename() when overwriting an existing file so that it works safely even without fsync() (https://lwn.net/Articles/322823/), but that is not guaranteed by all other implementations and filesystems.

The sync() at the end is indeed not needed for the atomicity, it just allows you to know that after its completion the rename will not "roll back" anymore on a crash. IIRC you can also use fsync() on the parent directory to achieve this, avoiding sync() / syncfs().


> ext4 on Linux (since 2009) special-cases rename

This is interesting.

The linked git entry (https://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.g...) from the LWN article says "Notice: this object is not reachable from any branch."

Did this never get merged? Because I definitely saw this issue in production well after 2009.

I guess it either got changed, or a different patch was applied, but perhaps this https://github.com/torvalds/linux/blob/master/fs/ext4/namei.... does it?


The patch just got rebased, here's the one that was actually applied in master for v2.6.30: https://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.g...

And yes, the code you highlighted is exactly this special-case in its current form. The mount option "noauto_da_alloc" can be used to disable these software-not-calling-fsync safety features.


I'd like to know why as well. The inclusion of the fsync before the rename implies to me that the filesystem isn't expected to preserve order between write and rename. It could commit a rename before committing _past_ writes, which could leave your /etc/passwd broken after an outage at a certain time. I can't tell whether that's the case or not from cursory googling (everybody just talks about read-after-write consistency). Maybe it varies by filesystem?

The final sync is just there for durability, not atomicity, like you say.


> the filesystem isn't expected to preserve order between write and rename

Correct.

The rename can succeed while the write of the file you just renamed gets rolled back.


You can use `/etc/passwd.new` as a temporary file to avoid the problems you mentioned. In the worst case, you'll have an orphaned passwd.new file, but /etc/passwd is guaranteed to remain intact.


Probably not. If it's implemented responsibly, it will internally:

1. Write to a temporary file

2. Do the equivalent of mv tmpfile originalfile

so it will either succeed or do nothing


"Responsibly" is subjective here. I could argue that responsible thing to do is to use as little resources as possible, and in that case, directly overwriting the file would be the "responsible" thing to do.


> I could argue that the responsible thing to do is to use as few resources as possible

No, you couldn't, because a sponge intentionally uses more resources: it soaks up as much as it can, and the program is intended to soak up all of the output. Otherwise it would be `cat`.


Your example proves my point: what's responsible is subjective. It's meaningless to talk about doing the "responsible" thing.


This is why I usually just use a temporary directory and do a quick

    git init .
    git add .
    git commit -m "wip"
... and proceed from there. There are so many ways to screw up ad hoc data processing in the shell, and the above can be a lifesaver. (Along with committing along the way, ofc.)

EDIT: Doesn't work if you have huuuuge files, obviously... but you should perhaps be using different tools for that anyway.
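A sketch of the safety net this gives you (paths and identity flags are illustrative; the `-c` flags just avoid relying on a global git identity): once the "wip" commit exists, a botched in-place edit is one checkout away from recovery.

```shell
# Throwaway git repo as an undo buffer for ad hoc processing.
cd "$(mktemp -d)" || exit 1
printf 'important data\n' > data.txt
git init -q .
git add .
git -c user.email=wip@example.invalid -c user.name=wip commit -qm "wip"

: > data.txt                     # oops: an in-place edit truncated the file
git checkout -- data.txt         # recover the committed copy
```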


You might like to try using src, a simple single-file VC: http://www.catb.org/esr/src/


I guess if you want something single-file that resembles git (on second thought, not sure if that's a requirement at all), you can also try Fossil ( https://www2.fossil-scm.org ).


This is why I read HN. You never know when a brilliant idea will appear. Thank you. I never thought of doing this for temporary work.


Why not write it to a different file?


  cmd < somefile > somefile.tmp && mv somefile.tmp somefile
Will read from somefile, and only replace the source if (and when) the command exits successfully.

Mind that this may still bite in interesting ways. But less frequently.

You can also invoke tempfile(1), which is strongly recommended in scripts.

https://www.unix.com/man-page/Linux/1/tempfile/


I was wondering what the difference between tempfile and mktemp was. At the bottom of the tempfile man page, it says:

> tempfile is deprecated; you should use mktemp(1) instead.
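
For reference, the usual mktemp(1) pattern in a script (the template argument is accepted by both GNU and BSD mktemp; the trap ensures cleanup even if the script fails partway):

```shell
# Create a unique temp file and remove it when the script exits.
tmp=$(mktemp /tmp/demo.XXXXXX) || exit 1
trap 'rm -f "$tmp"' EXIT

printf 'scratch data\n' > "$tmp"
```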


I ... need to revisit that. Though I suspect you're right.


How do you know you've tested all cases?



