According to a post in the comments, the author is 18 years old which means that I've been using Linux longer than the author has been making use of the blood in his veins. But that does not mean that I didn't learn something from his post!
While the technical accomplishments and understanding are commendable, I think the main takeaway for me is the excellent approach to learning. Keeping it simple and taking an open-minded approach to learning (as exemplified by his attitude toward systemd) is something more people could stand to do. So often we complicate things in our heads to the point where they seem unknowable, so we assume they are too difficult and we don't try. I have seen this in myself at times, and I see it in others as well.
I'll be sharing this post at work so that the people I work with who don't really know Linux very well can appreciate the approach that was taken, and hopefully also glean something from the excellent write-up itself.
One benefit of learning old technology from youth... I've got tons of cruft in my head, and this "old technology" has been continually rebuilt the entire time I've been using it. Kids these days aren't gonna, I dunno, fry their hard drives because the IDE cable doesn't have a notch. One guy I learned Linux from was proud that he still shut down his computer with `sync;sync;halt` because of some nasty race condition or something that bit him once in the 80s. And despite using Linux since the last millennium, I'm still a total noob! So bring on that fresh knowledge!
Ha! This is specific to Solaris in the old days, where a reboot command could in some cases corrupt the OS. That has long since been fixed, but I personally still use sync;sync;halt on Solaris as a matter of habit. It even spills over to Linux on my part, because of kernel upgrades and sheer muscle memory; plus it's a good idea to run sync anyway. Now I feel old! :)
BTW, the 'sync;sync;sync' thing was a command necessary, mostly during the late 70s and early 80s, to park the tape head. Unix didn't have a 'rewind tape' command, so many manufacturers instead decided that three rapid syncs in a row meant 'rewind tape'.
Disclaimer: had that muscle memory burned into me for decades, I still do it ...
To be perfectly honest, I inserted the semicolons because I wanted it to make at least a little sense... I wasn't paying close enough attention to the old-timer reminiscing to recall his precise keystrokes. But thanks for the link!
I've been doing this so long that it's sometimes what I type on the terminal when I'm just thinking about something else completely, planning my next piece of work or whatever. Like an idle or holding pattern.
> I think that the main takeaway for me is the excellent approach to learning
As in the well-known dictum by Richard Feynman, “What I cannot create, I do not understand,” successful design is also a powerful way to show that a design principle has been understood.
I've also been a Linux user for cough quite a long time and I love your approach. You are also a good communicator: you clearly understand that a blog post for general consumption should be short, pithy, focussed and, above all, short. The internets have a very short attention span ... oh look, squirrel.
Good skills Sir. At your age I couldn't find my arse with both hands 8)
It's also refreshing to see someone read a couple of articles, do a small project, and be happy with the basic understanding they have gained. People think you need to be an expert at everything you do, but having a working knowledge of lots of tools is great, and this seems like a wonderful way to acquire that.
I completely agree. I pride myself on having a smattering of knowledge in a wide range of topics. As a result I can have an intelligent conversation with nearly anyone on almost any topic. This in itself is extremely valuable.
I think you got downvoted for what may have been interpreted as bragging. Keep up the good work and don't wait for validation from others. While some acknowledgement does blow wind into one's sails, there's a trap of getting stuck in that pattern. Anyway, good luck with your projects; the future looks very bright for someone like you.
This is exactly what I'm trying to do to get some of my rotting skills back up to date. I say "am trying"... more like "have been trying for ages but not finding time & motivation synchronously"! It partly fails because I end up thinking too big: I spend so much time thinking about what I _could_ do, eventually, that I have no time left to actually do any of it.
Kudos to the young'un for doing what I'm only talking about! That is a person with the ability needed to go far.
> The first thing I did was to read the entire Wikipedia page on UNIX. I also read this original paper written by Dennis Ritchie and Ken Thompson from 1974
We need more people like you. Kudos for having the attention span to do this. Enjoy your HN top spot, well deserved.
I find that good Wikipedia articles with competent copy-editing, pure fact, and no nonsense make for a good read, at least much more so than many pseudo-intellectual-sounding articles from a few outlets that get posted on HN regularly.
When I got into Unix in the '90s, the first thing I did was read the man page for every command and try to run as many of them as I could, with as many options. Decades later it still pays off. If you want to get good at something, just dive in.
It does, and my own decision to pursue Linux (in the mid-1990s) was based in large part on seeing multiple cases of earlier proprietary tech knowledge (CP/M, DOS, classic Mac, MVS TSO/ISPF, VMS, MS Windows) either be made obsolete or be limited to specific hardware or system vendors.
By contrast, Unix, in its BSD and GNU variants, was vendor- and hardware-independent, and by the mid-1990s, Linux was pretty clearly its successor.
Editor / word-processing tools have been especially telling. I've used and often been expert at: DOS Edit, EDT and EVE (both VMS), MacWrite, WordPerfect, WordStar, MS Word, AmiPro, Notepad, MS Write, Applixware (an early Linux office suite), and multiple variants of LibreOffice (StarOffice, OpenOffice, NeoOffice, ...).
And then there's vi/vim, which I first cut my teeth on in the mid-1980s, and which I still use daily and learn new features and uses of all the time (some themselves new, often long-time capabilities). That's been an extraordinarily durable learning investment. (Emacs similarly so.)
Awk's another basic tool that's quite useful and ubiquitously available.
There have been hitches and inconsistencies: hardware changes invalidate old concepts (shutdown sequences, as noted here); firewall, audio, and other subsystems have changed dramatically and multiple times; physical vs. cloud server management is a very different headspace; and systemd is a major (and often disturbing) development.
Generally, though, the further you stay from GUI and vendor-specific tools, the more durable the knowledge. My preferred desktop environment is based on the 1980s NeXT and has remained almost wholly unchanged since the mid-1990s whilst GNOME and KDE underwent multiple revolutions and Xfce4 came into being. Windowmaker still does things none of the other three can touch, and my muscle-memory is well into its third decade.
As you get older, learning slows, but it's forgetting which becomes especially hard. Durable, incrementally-evolving tools minimise that pain remarkably.
Especially when you add Anxiety to the mix. I may know exactly what I should do and how I should go about it, but when it comes to doing I face an illogical amount of stress, so I elect to distract myself.
Projects like these are exactly what universities use for teaching, too! You're way ahead of the curve.
For example, CMU also makes you write your own shell. The main focus is implementing job control (jobs, fg, bg, and various signal handlers) in order to understand how forking child processes works.
I see that your shell doesn't handle much other than cd so this might be a nice next step if you want to continue working on it. (I can't find a good public link for it but you can probably follow along using: http://www.cs.cmu.edu/afs/cs/academic/class/15213-s02/www/ap...)
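Not the CMU handout itself, but here's a minimal sketch in C of the fork/exec/wait core such a shell grows from. The name tinysh and the crude tokenizer are purely illustrative; real job control (process groups, a jobs table, terminal handover) is considerably more involved.

    /* tinysh.c -- minimal fork/exec/wait loop with '&' backgrounding. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* SIGCHLD handler: reap finished background children so they
       don't linger as zombies. */
    static void sigchld_handler(int sig) {
        (void)sig;
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }

    int main(void) {
        char line[1024];
        signal(SIGCHLD, sigchld_handler);

        for (;;) {
            printf("tinysh> ");
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))
                break;
            line[strcspn(line, "\n")] = '\0';

            /* A trailing '&' means "run this in the background". */
            size_t len = strlen(line);
            int background = (len > 0 && line[len - 1] == '&');
            if (background)
                line[len - 1] = '\0';

            /* Crude whitespace tokenizing: no quoting, no pipes. */
            char *argv[64];
            int argc = 0;
            for (char *t = strtok(line, " \t"); t && argc < 63;
                 t = strtok(NULL, " \t"))
                argv[argc++] = t;
            argv[argc] = NULL;
            if (argc == 0)
                continue;

            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
            } else if (pid == 0) {        /* child: become the command */
                execvp(argv[0], argv);
                perror(argv[0]);
                _exit(127);
            } else if (!background) {
                /* Foreground: wait. A real shell blocks SIGCHLD here
                   so the handler can't steal this child's status. */
                waitpid(pid, NULL, 0);
            } else {
                printf("[bg] pid %d\n", (int)pid);
            }
        }
        return 0;
    }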
Another, more Linuxy thing to try: build a PAM (authentication/authorization/etc.) module to embed your own secret, or come up with a fun/unique way to enter your password (e.g. you cannot log in unless you've plugged your headphones in, etc. :) )
This makes you very aware of the way login works in Linux, and teaches you how to compile your library as a shared object conforming to an interface. Of course, you should be aware that you've just rolled your own security mechanism, so treat it that way and avoid using it for anything sensitive :)
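To make that concrete, here is a minimal sketch of what such a module can look like in C. The module name pam_fileflag and the marker path are invented for this example; the only "secret" is whether a file exists.

    /* pam_fileflag.c -- toy PAM module (hypothetical name): the "auth"
       step succeeds only while a marker file exists. For playing with
       PAM only; as noted above, don't protect anything sensitive with
       a home-rolled scheme like this. */
    #define PAM_SM_AUTH
    #include <security/pam_modules.h>
    #include <unistd.h>

    int pam_sm_authenticate(pam_handle_t *pamh, int flags,
                            int argc, const char **argv) {
        (void)pamh; (void)flags; (void)argc; (void)argv;
        /* /run/allow-login is an arbitrary marker chosen for the demo. */
        return access("/run/allow-login", F_OK) == 0 ? PAM_SUCCESS
                                                     : PAM_AUTH_ERR;
    }

    /* Auth modules must also provide this, even as a no-op. */
    int pam_sm_setcred(pam_handle_t *pamh, int flags,
                       int argc, const char **argv) {
        (void)pamh; (void)flags; (void)argc; (void)argv;
        return PAM_SUCCESS;
    }

You'd build it as a shared object with something like `gcc -shared -fPIC -o pam_fileflag.so pam_fileflag.c` and reference it from a service's stack under /etc/pam.d/; see the sibling comment's advice about keeping a root shell open while you experiment.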
Playing with PAM is definitely worthwhile (and don't forget to have a root shell open the whole time so you can fix it when you break things!). It really drives home exactly what the kernel's view of users is, and what is just convention in userspace. I think PAM was the last piece of GNU/Linux to transform from magic to understanding for me.
Kind of a shameless plug, but you mentioned wanting to get better at Awk. I had that same desire and created a small course based on what I learned. The course got great feedback. I've gotten a lot of nice emails from people thanking me for making Awk understandable to them for the first time. I love getting those kinds of messages!
There is a video presentation, and a set of "challenges" you can use to incrementally get more complex with awk, starting from super simple.
Years ago, I wrote a filter that reversed lines and reversed characters within each line after padding with blanks. I ran a text file through it, printed the result, turned the paper upside down, and complained to a sysadmin that the printer was rotating each character in place. (I didn't keep them hanging for long.)
It's not exactly the same as `tac | rev`, though. For a typical text file with each line ending in a newline, this recat tool outputs a newline first, and the last line comes out missing its newline. So it's a true reverse printing, last byte to first.
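For the curious, a sketch in C of a recat with exactly that byte-for-byte behavior (not the author's code, just the obvious slurp-and-reverse approach):

    /* recat.c -- reverse a file byte-for-byte, so a trailing newline
       comes out first and the former first line ends unterminated. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        FILE *f = argc > 1 ? fopen(argv[1], "rb") : stdin;
        if (!f) { perror(argv[1]); return 1; }

        /* Slurp everything into a growable buffer. */
        size_t cap = 4096, len = 0;
        char *buf = malloc(cap);
        if (!buf) { perror("recat"); return 1; }
        int c;
        while ((c = fgetc(f)) != EOF) {
            if (len == cap) {
                char *p = realloc(buf, cap *= 2);
                if (!p) { perror("recat"); return 1; }
                buf = p;
            }
            buf[len++] = (char)c;
        }

        /* Emit the bytes last-to-first. */
        while (len > 0)
            putchar(buf[--len]);

        free(buf);
        return 0;
    }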
> Can’t knock it till you try it, so here I am trying to learn Bash scripting.
For bash scripting I found Julia Evans' zine Bite Size Bash very interesting. Even though I had been scripting for quite some time, I still found hidden gems in there.
Purely judging from your profile picture, it seems you're about half my age :) and I think it's excellent that you're doing a bunch of things: a personal blog, Linux, a tutorial post... Just remember that most of the guys around here tend to do it for marketing reasons (i.e. we want to find a job and we're advertising our capabilities), and those projects might not be the most interesting things out there.
Bottom line is - just because everybody else is doing it, might be because they have different motives than you, and I hope you'll do/build firstly what's interesting and fun, and secondly what's financially viable.
Many services will auto-translate :) into something else, like a colourful unicode icon. If you prefer to keep your smiley looking like just a smiley, (: is an effective workaround.
There's also an upside-down emoji smiley with its own distinct mood. Hopefully services don't start translating (: into that, they're used quite differently!
I got hired (at probably about his age?) purely due to side projects like this. If nothing else, it shows motivation. So it's got a marketing benefit nonetheless.
Isn't this the programmer equivalent of one of those old stories, "just go talk to the manager and give him a firm handshake"? All the recent research on hiring seems to show that whiteboard/HackerRank/LeetCode-style filtering is up, and the odds of someone with hiring authority actually looking at a portfolio or code repo are at an all-time low.
Getting things on the front page of Hacker News remains a good way to get people to contact you. Source: have had people contact me with job-related things for that reason.
Just a couple days ago I was going through all kinds of things I made when I was 18. That was almost 20 years ago, and I had VB6 projects and a blog (we called them E/N sites; the word blog wasn't coined yet). The VB6 projects were AOL "progs"--things that interacted with AOL chat, mostly. For the blog, I started with HTML, then .shtml (server-side includes), then discovered PHP. Built a whole CMS using flat files, which was the only thing I knew, then rebuilt it with a database...
I have full source of ALL of that!
I also discovered Linux in that time, and one post mentioned running Mandrake 7.2--which I doubt was the first version I ran.
Those skills eventually turned into jobs. And some of my peers who were doing the same thing became lifelong, important friends.
Anyway, just reminiscing. I had more fun computing in those few years than I've had in the whole time since. And even though it was for fun, it was easily some of the best-spent time in my life for $$$ in the future.
>Awk and sed are more of the UNIX magic that I have always thought was really cool, though I never really understood what they were used for.
I was so put off by their syntax that I didn't even bother trying to learn them when I was a newbie. If I had a time machine, I would certainly go back and tell myself to learn these tools and save plenty of time, instead of always reaching for vim/perl for every text-processing problem. I've since written my own books (https://github.com/learnbyexample/scripting_course#ebooks), which might help you.
It is a simple page with links to the corresponding parts of the POSIX standard which makes it a lot easier to find the correct documentation when you need it.
For those who wonder what POSIX is: it is a standard for operating systems, and one part of it covers shells. If a shell wants to be POSIX compliant, it has to support certain commands. And if you write POSIX-compliant scripts, they won't just run on Linux, but also on macOS, the BSDs, and other POSIX-compliant operating systems. Bash supports POSIX, but if you use Bash extensions that are not part of POSIX (say, the `[[ ... ]]` test syntax), you might run into trouble with other shells (e.g. dash on Debian).
I really like the idea of using games to teach command line concepts, like the author did with grep and find. There have been several other ones that have popped up about using vim [1] and navigating a filesystem via cd/ls[2].
I've always wanted to make a unix sandbox environment, using FreeBSD jails provisioned from a webapp, that has little challenges and games to teach the basics of the command line all the way up to hosting a basic HTML website with Apache.
Is this something people would find useful? I've been thinking about implementations but I don't want to jump too far into it without validation that this would be something people would find helpful and interesting.
It sounds like a brilliant idea. I remember learning my way around vim and emacs from interactive tutorials that were basically just text files. I had no idea what I was doing, but it was fun to experiment.
If you implemented your game, I would definitely give it a try. More for fun than for the learning experience, but if I happen to learn something new along the way, I certainly would not complain. ;-)
One of the biggest improvements I saw in my Linux usage/admin was when I started challenging myself to use the mouse as little as possible; it was absolutely eye-opening when I could navigate around my machine and do all of my everyday processes without ever really leaving "home row". It also made VM'ing into a Windows server a total pain because it was fairly quickly back to "clicky-clicky" navigation and processing (although Win+R, tabbing, and arrow keys could get you by about 90% of the time).
Great article, and great approach to learning the inner workings of the standard Linux suite!
I think maybe you are referring to those numbers people used to quote after mentioning a manpage (1) (5) and so on. I totally missed the boat on that one, and never learnt what these designations meant.
Nope :) The joke was referring to the fact that the GNU Project (and thus the Bash project) insists on writing their documentation in the Texinfo format.
IIRC for a while they used to have man pages that basically said “just read the Info pages instead”. This is not the case anymore, in Linux distros these days even the GNU tools have decent man pages.
Anyway, the Bash manual is actually written in the Texinfo format. It’s not so bad, to be honest :)
A large portion of them are just auto-generated from the `--help` output (and `--help` has always been fairly high quality for GNU tools). Perhaps with a few extra paragraphs sprinkled in.
I've found that GNU man pages are often an unhappy medium between the terseness of `--help` and the verbose explanations in the info manuals; and so the GNU man pages are almost never what I want. If I want to just quickly look something up, the man pages are too big and I should have used --help, and if I want to understand something they're too brief and I should have used info.
That said, the Bash man page is great (as is the info page). It's always amazed me that the Bash maintainer (Chet Ramey) goes through the trouble of maintaining two entirely separate documents with such detail and quality.
Amusingly "info bash" took me to the man page until a separate bash-doc package was installed. One day I'm going to write a script to automatically install missing "-doc" packages and learn to navigate info.
The numbers, as in "bash(1)" refer to sections of the manual.
"Info" is a different documentation format, most commonly used for GNU tools (including bash). The section numbers don't generally apply to info documents.
For some tools, the man and info documents have the same information. For some, the man page is a brief summary that directs you to the info documentation. For some, there's only a man page. (And for some there isn't even that.)
The (1) and (5) and so on are the section of "the manual" that is being referred to; "the manual" referring to the entire system manual, and the one command you're looking up being just one page in that manual (hence "man page").
As for what those sections are, `man man` tells me (my system gets this man page from the "man-db" project https://www.nongnu.org/man-db/ ; your system may use a man page of different authorship, but it should tell you roughly the same thing):
    The table below shows the section numbers of the manual followed
    by the types of pages they contain.

    1   Executable programs or shell commands
    2   System calls (functions provided by the kernel)
    3   Library calls (functions within program libraries)
    4   Special files (usually found in /dev)
    5   File formats and conventions, e.g. /etc/passwd
    6   Games
    7   Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7)
    8   System administration commands (usually only for root)
    9   Kernel routines [Non standard]
Sometimes the number has a suffix; for example, the "p" suffix as in (1p) or (3p) indicates that the man page is taken from POSIX (and therefore might be missing features that your version has; but also that everything in it can probably be relied on on other systems). Or (3perl) which indicates that it's a Perl library function instead of a C library function.
You can tell `man` which section you're looking in like `man 5 passwd`. This is useful for looking up passwd(5) which describes the /etc/passwd file, rather than passwd(1) which describes the `passwd` command to change your password. Or `man 2 signal` to describe the "signal" system-call instead of the signal(3p) libc function, or `man 7 signal` to get an overview and listing of signals.
When I took a programming class, a student sitting next to me introduced me to Linux. I was like, "what is that?" I honestly thought these kinds of machines were just in the movies. I loved how it booted to a black screen with text scrolling, and there was a penguin. Unlike today, there was no Ubuntu. I didn't have wireless at home.
It took a while to get disks burned for the install. I struggled moving back and forth between two machines across rooms to look up "help" documentation on the internet. When it was all installed, nothing worked: network card, sound, etc. When my network card worked, it felt like heaven. When my screen resolution was fixed (X11 settings), it felt like heaven. Repeat.
He had asked, "do you really want to learn how it works?" I said, yes, let's start there. I was very confused by all the Linux variants, so he recommended one. Now I look back and am unsure that is where I should have started. LOL.
That's roughly how I started, around 1999. I loved the all-text console, multiple screens, and how the shell was essentially the same as my beloved AmigaOS. I went through countless distros (Corel, SuSE, Mandrake, Debian, Red Hat, Caldera, Yellow Dog, Mandriva) and leagues of desktops and WMs: KDE, GNOME, IceWM, Xfce, xmonad, WindowMaker, Enlightenment, Blackbox, Openbox, LXDE, Fluxbox. I'm not sure what effect this total lack of interface consistency has had on my psyche. Every time I upgraded the kernel or the distro, there was a significant chance the audio or video would stop working and require anywhere from 15 minutes to 2 days to restore; frequently the easiest way to get things working again was a reinstall. The positive aspect of this masochistic computing experience is that when I launched a startup 8 years later, I was quite comfortable filling the sysadmin role.
Interesting, but could there be a project to get OpenCL running over Vulkan, so that there's out-of-the-box ISO support for OpenCL compute on AMD's Ryzen APUs, without having to install the OpenCL parts from the AMD Linux Pro driver stack?
Mesa appears to be all about OpenGL and Vulkan for games, but not much attention is paid to OpenCL and GPGPU compute for things like Blender's GPU-accelerated Cycles rendering, or other applications like darktable and GIMP that need OpenCL for compute-acceleration workloads!
There really needs to be a primer on how OpenCL is plumbed into a Linux system and which files are needed to get it all working on both integrated- and discrete-GPU laptops. What's the Linux equivalent of Microsoft's WDDM, and how does the Linux kernel get graphics, the graphics APIs, and the drivers working properly for user-space applications to make use of?
I always thought about writing a GNU/POSIX utility called 'pp' (prepend) that would simply take an input file and insert it above the first line of a target file. So, essentially, if I have a CSV file without a header and want to quickly insert the header file, I would run:
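(Presumably something like `pp header.csv data.csv`, where both file names are hypothetical stand-ins.)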
But it would also support all of the expected POSIX niceties: reading from stdin, and so on.
But I never got around to it, and it's been a few years since I had to use C at my day-job (which was never great to begin with), so it fell to the wayside.
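For what it's worth, a first cut might look something like this in C. It's entirely hypothetical, since the tool was never written; it just streams the prepend file and the target into a temp file and renames the result over the target:

    /* pp.c -- sketch of the "prepend" idea above. A real tool would
       add the POSIX niceties: stdin via "-", option parsing, safer
       temp-file handling, and so on. */
    #include <stdio.h>

    static int copy(FILE *src, FILE *dst) {
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, src)) > 0)
            if (fwrite(buf, 1, n, dst) != n)
                return -1;
        return ferror(src) ? -1 : 0;
    }

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: pp <file-to-prepend> <target>\n");
            return 2;
        }
        FILE *head = fopen(argv[1], "rb");
        FILE *body = fopen(argv[2], "rb");
        FILE *out  = fopen("pp.tmp", "wb");  /* fixed name: sketch only */
        if (!head || !body || !out) { perror("pp"); return 1; }

        /* Header first, then the original contents. */
        if (copy(head, out) < 0 || copy(body, out) < 0) {
            perror("pp");
            return 1;
        }
        fclose(head);
        fclose(body);
        /* Atomically(ish) replace the target with the combined file. */
        if (fclose(out) != 0 || rename("pp.tmp", argv[2]) != 0) {
            perror("pp");
            return 1;
        }
        return 0;
    }

So `pp header.csv data.csv` would rewrite data.csv with the header on top.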
As someone whose only exposure to non-Windows systems is 20 minutes with the Raspberry Pi I picked up the other week, I think this will be a great resource for me to follow along with as I familiarize myself with everything. Thanks!
This kinda mirrors advice I would give my younger sibling when he was struggling to pick up programming, even though that is the direction he wanted his career to go.
He thought he needed to know everything and saw that as an impossible mountain to climb.
My advice to him was to find a small problem he cared about in the world or at home and develop a solution. When it's a problem you care about or want resolved, you are more invested and learning becomes fun. Then you can build on those skills with whatever you find to work on next.
> That thing ended up being a program that reverses the contents of a text file. Since this is just a reverse version of cat, I called the program recat.
Surprised that Linux From Scratch hasn’t been mentioned. Probably the best soup-to-nuts walkthrough out there for learning deeply about Linux. And I do agree with others that this is commendable- I remember my struggles with getting into *nix vividly. Keeping a beginner’s mindset and staying humble will take you far.
I've been sniffing around for projects to do to improve my coding ability and understanding of different programs/languages. Your article really motivated me to start some small projects of my own! I like the idea of not getting yourself down with giant projects and instead just making small applications of new technologies you want to learn.
For me, out of university, it was a co-worker's books (some of which I ended up buying) that helped me understand the role of Unix better. (I had used the Alpha machines at university, but was thrown into HP-UX/Solaris at work.)
One of my pastimes is randomly exploring the Unix Toolkit. It's basically, "What is this thing? What does it do? Do I have a use for it?" And even if the answer is "no", at least I now know it exists and where to find it should my needs change...
I’d like to echo some other comments and mention that this is the way to learn things like this. If I want to learn a new programming language, I write a program in it. I think the author’s Createservice script might be useful!
Welcome to the monkey house. The community behind Unix has always been my favorite aspect.
Many, many years ago I met Dennis Ritchie at the Computer Literacy bookstore in San Jose. In the Q&A he commented that when he realized a circuit board was essentially a small network, the entire design of Unix came to him in a flash. Unix is an I/O multiplexer with light security.