> I see no benefit.

The first benefit is that it removes clutter from your $HOME.

The second benefit is that you can now manage and back up your settings in a sane way.

* ~/.config contains config files (should not be lost, but if lost you can recreate them);

* ~/.local contains user data files (save them often, never lose them for they are not replaceable);

* ~/.cache contains cached information (can be tmpfs mounted, you can delete it any time you want, no loss in functionality, you just lose some optimization);

* ~/.run is for temporary system files (must be tmpfs mounted, or it must be cleaned at shutdown or power on).
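
For reference, the spec ties each of these directories to an environment variable with a documented fallback. A minimal sketch of how an application might resolve them in a POSIX shell (variable names are from the spec; the runtime default shown is only a common convention, since the spec leaves it unspecified):

  # Resolve the XDG base directories, falling back to the spec defaults
  # when the variables are unset or empty.
  config_dir="${XDG_CONFIG_HOME:-$HOME/.config}"
  data_dir="${XDG_DATA_HOME:-$HOME/.local/share}"
  cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}"
  # The spec gives no default for the runtime directory;
  # /run/user/$(id -u) is just what many current systems provide.
  runtime_dir="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"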

Luckily most of the apps used on Linux systems now use it; you are probably using Mac OS X.




All of your points are invalid for a simple reason: Almost no software uses this fancy standard.

And for the backup case: whitelisting is usually a futile idea to begin with. Normally you'd prefer to back up the odd superfluous file rather than miss an important one.

Luckily most of the apps used on Linux systems now use it

Excuse me?

  $ find ~ -maxdepth 1 -name ".*" | wc -l
  228

  $ find ~/.local | wc -l
  4

  $ uname
  Linux


I counter your anecdote with my anecdote:

  $ find ~ -maxdepth 1 -name '.*' | wc -l
  354
  $ find ~/.local/ -maxdepth 1 | wc -l
  3
  $ find ~/.local/share -maxdepth 1 | wc -l
  66
  $ find ~/.config/ -maxdepth 1 | wc -l
  108
  $ find ~/.cache/ -maxdepth 1 | wc -l
  803
  $ uname
  Linux
  $ lsb_release -d
  Description:	Ubuntu 12.04 LTS


That's interesting, and sort of disturbing.

My box is not a desktop, so that's probably the difference. I still find that scheme an atrocity.

When going to that length, they could at least have settled on one directory (~/.appdata or whatever). Half-baked is the most polite description I can come up with.


But all those folders are different, so a single one would be annoying (or require two layers).

.config can be posted online and shared with others (like the many "dotfile" repos you'll see on GitHub).

.local needs to be backed up, and may have private data.

.cache can be blown away (or tmpfs).

.run MUST be blown away on restart.

This is simple, sane, and works well.
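
As a concrete illustration of how the split pays off, here is a hedged sketch of a backup policy built on it (rsync assumed; "backup-host" is a made-up destination):

  # Back up the irreplaceable data and the shareable config;
  # skip the cache and runtime directories on purpose.
  rsync -a --delete ~/.local/  backup-host:backup/local/
  rsync -a --delete ~/.config/ backup-host:backup/config/
  # ~/.cache can always be regenerated, and runtime files don't
  # survive a reboot anyway, so neither gets copied.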


Yes, if you push me like that I'll say it: it's incompetently overdone.

When your goal is to "reduce clutter" then 2 layers would be the minimum. You make another 4(?) folders in my home-directory and call that reducing clutter?

And when I delete an app then I have to look in all of them? That is just utterly backwards for no conceivable reason.

Due to the semantics you now suddenly need a cronjob or similar abomination that traverses all home-directories and picks out stuff ("MUST" be blown away). This will by definition be fragile and have funny corner cases in the first few iterations. Also, what happens when ".run" is not blown away, like on a system that doesn't implement this nonsense?

The definitions are blurry and complex; many apps will get them wrong (.local vs. .config, etc.).

Unix already has a location for temp files. It's called /tmp.

And what the heck is going in .local anyways? When the user saves a file then he pretty surely doesn't want it buried under some dot-directory.


When your goal is to "reduce clutter" then 2 layers would be the minimum. You make another 4(?) folders in my home-directory and call that reducing clutter?

I can see a clear and very useful difference between RUNTIME_DIR, CACHE_DIR, and CONFIG_DIR. Consider a scenario where $HOME is on a networked filesystem. RUNTIME_DIR has to be outside that, local to the machine's namespace, because it references things inherently local to the machine: PIDs and pipes. These wouldn't make sense on any other machine and would just make the application's job harder.

I also set CACHE_DIR to be local (/tmp/$USER.cache). That's because caching performs terribly when it's flying over the network. Chrome is the main culprit for me; it also fills my file quota within hours of use. However, it's still useful to keep that data in the medium term.

CONFIG_DIR and DATA_DIR, however, don't seem very different to me. I can't imagine a scenario where I'd want one but not the other. I might be using the wrong sort of applications. (For the record, I have 8 files in .config, 10 dotfiles, and just 1 in .local/share.)

Due to the semantics you now suddenly need a cronjob or similar abomination that traverses all home-directories and picks out stuff ("MUST" be blown away).

Having RUNTIME_DIR on a tmpfs, as most distributions do with /var/run, solves that problem. I map mine to /var/run/$USER, even though I've yet to see an application actually use it. The spec, BTW, doesn't even specify a default value!
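
For what it's worth, a hedged sketch of the overrides described above, as they might appear in a login script (paths follow this comment's examples; adjust to taste):

  # Keep cache and runtime data off the networked $HOME.
  export XDG_CACHE_HOME="/tmp/$USER.cache"
  export XDG_RUNTIME_DIR="/var/run/$USER"
  mkdir -p "$XDG_CACHE_HOME"
  # XDG_RUNTIME_DIR is normally created and cleaned by the system
  # (e.g. via pam_systemd); making /var/run/$USER by hand needs root.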

And what the heck is going in .local anyways? When the user saves a file then he pretty surely doesn't want it buried under some dot-directory.

I agree: the default values are silly. This, like most of Modern Unix, is an ugly hack which makes dealing with the rest of the ugly hacks a bit easier. If you want an elegant solution you'll probably have to throw away most of what was added during the last 20 years. May I suggest starting with sockets?


   % uname
   Linux
   % find ~/.local | wc -l
   16824
   % find ~ -maxdepth 1 -name ".*" | wc -l
   279


> % find ~/.local | wc -l

Missing something?


> The first benefit is that it removes clutter from your $HOME.

Invisible clutter? That's a strange concept.

But the rest of your points indeed make sense. It is still easier, and probably more common, to back up the whole $HOME. But those points can be seen as a benefit, though not an obvious one.


As Rob mentions in his post, the more dotfiles there are in $HOME, the slower path resolution for anything under it becomes. How do we navigate to ./src? We open the directory and read all the entries until we find the one called "src". What happens if we encounter a morass of dotfiles beforehand? src takes a while to find. The clutter may be invisible to you, but it does gum up the works.


For what it's worth, most modern file systems (JFS, XFS, ext4, reiserfs, btrfs, ...) have logarithmic (or better) directory lookup times. This is achieved using hashes and b-trees (or even hash tables).


Fair point. Though anything using the standard POSIX dirent API would still get the performance hit (even if path resolution doesn't).
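
For the curious, a rough way to see both effects with a throwaway directory (bash assumed; timings vary wildly with the filesystem and cache state):

  # readdir() has to return every entry, dotfiles included, even though
  # only "src" ends up being printed; a single path lookup does not have
  # to scan the whole directory on filesystems with indexed directories.
  mkdir -p /tmp/crowded && cd /tmp/crowded
  touch src
  touch .junk{00001..20000}
  time ls > /dev/null          # enumerates all ~20001 entries
  time stat src > /dev/null    # one lookup; cheap on ext4 and friends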


Unless you have many thousands of files, I can't imagine you would ever notice a slowdown.


It's not invisible when you're actually looking for an invisible file.



