Hacker News new | past | comments | ask | show | jobs | submit login

So now we have Gnome-specific config storage, KDE-specific config storage, and everything else using its own config storage.

No. If the Gnome-specific and KDE-specific config systems did not exist, we would all have standard ini-style config files sitting in /etc (for system/root) and in ~/.gnome and ~/.kde respectively, with each program owning a file named after itself.

Also, the biggest problem with the Windows registry isn't its internal structure: users don't care about that. The issue with the registry is that it grows over time and never shrinks, due to buggy and forgotten programs that leave junk in there. That's the #1 reason people complain about their computers getting slower. GConf happily inherits this fatal flaw, because find ~/.gconf/ -name "program" won't find anything.

Again: gconf-tool is an example of the non-standard little turds I was referring to when I expressed my annoyance with managing multiple configuration systems. And how does placing ~/.gconf under git help me quickly restore configuration for selected apps? Have you looked inside .gconf? Have you seen the wifi.conf file in there? Those who have would no doubt pick /etc. One more time: gconf is not find/awk/grep/sed friendly, hence it is a maintenance nightmare.

Yes. Why not run a daemon? It eats maybe a few hundred KB of private RSS.

How typical for a 2010 technologist: "I want this tiny little toy feature which will eat only a few insignificant hundred KB...". You know what? Users have hundreds of tiny little needs that they would enjoy having run all the time. I can keep naming things a computer is capable of doing forever, and if you multiply those hundreds by your "few hundred KB", very quickly you'll end up with something like Vista: functional, yet barely capable of running itself. Moreover, are you sure gconfd eats hundreds of KB? Care to look again?

Sending configuration change notifications is NOT a significant enough feature to deserve its own daemon. Even Windows is smart enough to run a single svchost.exe service that groups dozens of little background features like that into one process.

No, I think your fear of 2010 technology and your tendency to stick with 1970s stuff is more of a disease.

The industry seems to be voting my way: it is the simple 1970s pieces of Linux tech that dominate the competition, yes, the ones that read their config from ini files in /etc. And your 2010 "pluggable technology" is still waiting for the year of the Linux desktop.




> And how does placing ~/.gconf under git help me quickly restore configuration for selected apps? Have you looked inside .gconf?

Yes - it contains the hierarchy mapped to directories. In my case `.gconf` contains the directories `apps`, `desktop`, and `system`. `apps` then contains a subdirectory for every application using gconf - for example `ekiga`. You can revert the chosen directory there to revert the configuration for a single application. I honestly don't know what problems you are seeing.

> Have you seen wifi.conf file in there?

No, I don't have it. All my configs are stored in the respective app's directory, in a `%gconf.xml` file. Are you sure wifi.conf is managed by gconf? Maybe it's just some random file that was put there by a broken app.
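
One quick way to check: gconf's XML backend stores values in files named `%gconf.xml`, so anything else under ~/.gconf was put there by something other than gconfd. A sketch, using a throwaway directory in place of a real ~/.gconf and a hypothetical stray wifi.conf:

```shell
# Sketch: flag files under a gconf tree that gconfd's XML backend would not create.
demo=$(mktemp -d)
mkdir -p "$demo/apps/ekiga" "$demo/system/networking"
touch "$demo/apps/ekiga/%gconf.xml"          # normal gconf storage
touch "$demo/system/networking/wifi.conf"    # stray file from some broken app
find "$demo" -type f ! -name '%gconf.xml'    # prints only the stray wifi.conf
rm -r "$demo"
```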


Standard ini files: ...what? Take a good look at all the non-GNOME and non-KDE config files. Which of them are ini files? I dare you to name even three non-GNOME, non-KDE apps that use the same config file format!

Not removing configuration: yes, GConf doesn't remove old configuration automatically. Guess what: neither does anything else. Uninstall nano and /etc/nanorc will happily stick around until you manually remove it.

'find' not finding anything: I guess you didn't use find correctly:

    bash:~/.gconf$ find -name 'gedit*'
    ./apps/gedit-2
    ./apps/gnome-settings/gedit

Memory usage: straw man. You would be right if there were hundreds of daemons each eating a few hundred KB, but there aren't: there are only a handful of such daemons. Okay, so gconfd-2 eats 2.3 MB of private RSS; I was wrong. But that's still not bad, since there's only one instance per user. The comparison with Vista, however, is a total straw man: the fact is that all those daemons together eat nowhere near as much memory as Vista. They provide useful services for a reasonable amount of memory, so all is well. Memory usage is a trade-off; zero memory is only possible with zero features. If you want all your RAM available, why don't you go use your bash shell with no 'ls' or anything else installed?

Grouping services together into a single process: yes, you can save memory, but is it worth it? Suppose we have a low-memory system with only 128 MB of RAM. There are about 10 GNOME daemons, each with a private RSS of between 500 KB and 2 MB, so say the total memory consumption is 12 MB. Suppose you group them together to save the per-process overhead, and save 30% of that memory. Memory usage has gone down to 8.4 MB. Compared to the 128 MB of RAM, you've saved 2.8%. Woohoo! Now, was that worth it? I think developers have better things to do with their time than to save 2.8% of memory on a 128 MB system from years ago.

As hardware capacities continue to rise, the point of diminishing returns draws closer and closer: soon a developer will have to spend 100 hours just to reduce an already tiny 10 MB of memory usage by 2%. All this to please old-gregg, who would otherwise continue to complain. Do you honestly think all that labor is worth it?



