Some LFS users don't maintain their packages at all: if you re-install every few months, you keep your system up-to-date. If this sounds strange to you, realize that most LFSers are accustomed to running new builds frequently.
Personally, I tracked file/package relationships with a simple script; it's easy to do (see the sketch below).
Tracking package dependencies is more difficult, and I usually did this manually, with help from some specialized scripts.
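For the curious, a minimal sketch of the timestamp trick such a script can use, assuming packages are installed one at a time; the paths and names here are illustrative, not my actual script:

    # minimal sketch: record which files a package installs by comparing
    # mtimes against a marker file; all paths and names are illustrative
    import os, subprocess, time

    def track_install(pkg, install_cmd):
        marker = "/tmp/install-marker"
        open(marker, "w").close()          # timestamp reference point
        time.sleep(1)                      # make sure mtimes can differ
        subprocess.run(install_cmd, shell=True, check=True)
        # every file under /usr newer than the marker belongs to pkg
        out = subprocess.run(
            ["find", "/usr", "-newer", marker, "-type", "f"],
            capture_output=True, text=True, check=True)
        os.makedirs("/var/lib/pkgtrack", exist_ok=True)
        with open(f"/var/lib/pkgtrack/{pkg}.list", "w") as f:
            f.write("\n".join(sorted(out.stdout.splitlines())) + "\n")

    # e.g. track_install("zlib-1.2.13", "make install")

The same per-package file lists make uninstalls and "which package owns this file" queries trivial.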
The plus side is you can avoid dependencies you don't like and add functionality where you need it. You can make a developer's heaven this way. Usually the only way I do serious Linux development is with an LFS-based system. When I use distros, I usually spend a lot of time creating a specialized environment, and the time saved by using a distro is mostly nullified.
The bad side, obviously, is that if you want to upgrade a vital dependency, you have to rebuild all of its forward dependencies too. Not so difficult if you have it scripted (see the sketch below), but errors are hard to handle. Of course, an LFSer would usually know how to install different versions of packages and have them co-exist. Mostly, though, you just want to keep top-level packages updated (e.g. Firefox, developer tools), which is easy to do.
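Rebuilding the forward dependencies in the right order is just a topological sort over the dependency graph. A minimal sketch of that, with a made-up graph (graphlib is in the Python standard library since 3.9):

    # minimal sketch: rebuild everything that transitively depends on an
    # upgraded package, in dependency order; the graph here is made up
    from graphlib import TopologicalSorter

    # pkg -> set of packages it depends on (illustrative only)
    deps = {
        "openssl": set(),
        "curl": {"openssl"},
        "git": {"curl", "openssl"},
    }

    def rebuild_order(upgraded):
        # collect everything that transitively depends on `upgraded`
        affected = {upgraded}
        changed = True
        while changed:
            changed = False
            for pkg, ds in deps.items():
                if pkg not in affected and ds & affected:
                    affected.add(pkg)
                    changed = True
        # order the affected set so dependencies build before dependents
        ts = TopologicalSorter({p: deps[p] & affected for p in affected})
        return list(ts.static_order())

    print(rebuild_order("openssl"))    # ['openssl', 'curl', 'git']

The ordering is the easy part; the real pain, as I said, is handling a build failure halfway through the list.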
There's little documentation to speak of, though, so I can't tell whether it helps with upgrades.
If you want to build from source and tinker with compile flags but also be able to upgrade seamlessly, Gentoo or Arch Linux is probably more appropriate.
I use LFS as my primary OS for Linux development and on servers. Let me tell the story of how I ended up with LFS in the forest of Linux distributions.
Some time ago, when I started to learn Linux, I tried a lot of different distributions. All of them had their quirks. Some were buggy. Some were slow. Some had extremely weird and bloated configuration systems. Imagine several layers of abstraction above the plain standard configuration files, where you have to learn everything basically from the beginning with each different distribution. For example, on top of the standard config files they may have some configuration system (like YaST in SUSE), on top of which the GUI sits (so three layers just to change the screen resolution). Many distros have their own "unique" way of configuring the system. And when something breaks, you have no idea where the problem might be. Is it a bug in the package, or in some layer of the configuration system, or somewhere else? I understand that they try to make the system more user friendly. Well, every distro has its own niche.
With LFS you just learn the configuration of the basic packages once, the way the package creator intended. Plain text files and nothing more. As an additional bonus you get great performance and security, because you only have the packages you need. Plus, everything is optimized for your architecture. If you have ever used some distribution in a virtual machine on a slow notebook, you know what I am talking about.
When I built LFS for the first time I did everything by hand. It was a very useful experience and a nice way to learn. Of course, now I don't do it manually: you can use automated tools (jhalfs, for example) that extract the instructions from the book and do everything for you. I don't use any graphical environment, so there's no need to build a lot of stuff beyond the base system.
Of course LFS has its own issues. Lack of package management is one of them. But if you like minimalistic and fast systems, you should give LFS a try.
Completely agree with your reasoning. I'm a long-time Slackware user. I discovered Slackware circa 1995 and throughout the years tried different distributions, only to go back to Slackware for its simplicity. Many of the O'Reilly books talk about the contents of various config files for NAMED, DHCP, etc., and Slackware just does not deviate from these. Outside of the Slackware installation, new packages can be installed straight from the raw tarballs. Never having to wait for or build RPMs or other cryptic incarnations made things easier and faster, and made me a better Linux admin when I had to diagnose issues in other distributions.
Gentoo is more like automated LFS with package dependencies and optional binary packages that you build yourself (to distribute to other machines you control).
I largely agree with your points, but you picked one of the least relevant distros for your example. One of the things I've always liked about SuSE/openSUSE is that it's actually very friendly to people who want to configure at different levels of hands-on versus GUI: the three layers (config files, /etc/sysconfig shell-style configs, and the GUI interfaces in YaST) are aimed at not stomping on your manual changes (unlike Ubuntu, say) and at allowing you to mix & match.
The way it works is that config files are checksummed, and if a file has changed from stock (or from the last version the system touched), YaST disables GUI alterations (which mostly just change sysconfig and run the SuSEconfig program to apply changes). The exception is very simple file formats (/etc/hosts, say) that have been marked as safe to parse and alter; for those, YaST reads in your changed version and presents that as the base. If I want to hack on my postfix configs, I can use the YaST mail module, which will tweak /etc/sysconfig/{mail,postfix} and automatically run SuSEconfig (and thus /sbin/conf.d/SuSEconfig.postfix); there's no difference between using the mail or sysconfig GUIs and editing the sysconfig files in emacs, then running SuSEconfig from the command line. If I then choose to edit /etc/postfix by hand, YaST will avoid stomping on my changes with its now-outdated version, but there are backups if I really mess things up and want to revert. For the same reason, network config gives you the option to swap from NetworkManager to the old, familiar and (for me!) entirely adequate config files in /etc. There are a small number of exceptions, but openSUSE is in my experience the least aggressive of the Ubuntu/Fedora/openSUSE trio in config lossage, while having the widest coverage of default config tools.
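To illustrate the mechanism (this is my sketch of the idea, not SuSE's actual code; the names and paths are made up):

    # minimal sketch of checksum-guarded config writes; record_write and
    # safe_to_write are my names for the idea, not SuSE's actual API
    import hashlib, json, os

    STATE = "/var/lib/configtool/checksums.json"   # illustrative path

    def _digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def record_write(path):
        # call right after the tool writes a config file
        state = json.load(open(STATE)) if os.path.exists(STATE) else {}
        state[path] = _digest(path)
        with open(STATE, "w") as f:
            json.dump(state, f)

    def safe_to_write(path):
        # the tool only overwrites files it wrote last; any mismatch
        # means the admin edited the file by hand, so back off
        state = json.load(open(STATE)) if os.path.exists(STATE) else {}
        return state.get(path) == _digest(path)

If safe_to_write() returns False, the GUI path is disabled (or, for the trivially parseable formats, the hand-edited file is re-read and becomes the new base).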
All this aside and despite having Ubuntu, Debian and openSUSE chroots on my HP TouchPad... I'm trying LFS on it as I write. For something where you really want to explore the architecture and have tight control it rocks!
Here's my use case: if you're not 100% sure how a Linux system is put together, and you want to learn The Hard Way (in a Zed Shaw sense), then put aside a weekend and go through as much of LFS as you can handle. Since it largely works from the bottom up, you can keep going with it until it starts to meet what you already know from the top down.
I'm not sure it's worth the hassle for an ongoing installation, unless you have specific requirements around a minimal tuned distro. But it's excellent for learning.
I've used it in the past to learn how a distribution is built from, well, scratch. :)
Nowadays distributions like Ubuntu allow you to install a full system with just a few mouse clicks, which is great for usability but not much use if you want to learn how everything works.
It's a great way to learn how Linux works. I don't know of anybody using a Linux they've built from scratch for anything other than their education, though.
I used Linux From Scratch as my regular operating system for two years. It was a lot of fun knowing that I had built everything I was using. The system was also good because only the software I really needed was installed. But one sad day, I needed to format the hard drive. I didn't have the patience to build everything from scratch again, so I installed Ubuntu. :-(
If I wanted to learn how to do it myself but still keep it repeatable, would it be a good idea to write a script or scripts to do it, instead of running all the commands by hand?
I am really curious: suppose I have machine X with everything configured, and I have a new machine Y of the same architecture (say, 64-bit). Can I port from one to the other?
You could if the new machine was very similar to the old one. In practice, the new hardware is better in some ways, so just copying everything will either not work at all or only work sub-optimally.
OTOH, your personal application configuration files (editor, window manager, etc.) can nearly always be copied as-is without causing any problems. Some people have been copying the same .emacs from machine to machine for years, if not decades, upgrading Emacs all the while.
Hm, apparently, there are at least two meanings of "how Linux works".
To learn how Linux works, I would recommend "Linux Internals", because from skimming the TOC it seems similar to "UNIX Internals: The New Frontiers". The latter book does not mention Linux, but probably teaches you more about what I would describe as "how Linux works" than this book.
This book is more of a "how do you build a new distribution from scratch" guide. Doing that will teach you a bit about the userland tools and their dependencies. That can be fun and useful, but it is a different thing.
As others have mentioned, LFS is a great way to learn about Linux. After building LFS, I migrated to Gentoo. LFS shows you how everything is done manually, whereas Gentoo takes care of all that grunt-work via Portage. LFS is an excellent stepping-stone towards Gentoo (or any other package-managed distro), showing you how things work under the hood.
http://linuxfromscratch.org/pipermail/lfs-dev/2011-October/0...