I've found this very true, as I've moved over the years from very "high-level" programming (client/server apps, database-driven web sites) to "mid-level" programming (working on the Firefox browser) and then toward systems programming (working on Rust and Servo). The desire to stop dealing with masses of fragile dependencies keeps driving me lower in the stack. I really need to practice more assembly coding so I can continue in this direction...
You get some similar benefits if you are working on code where you have some control over the levels above or below, or if your project has any form of self-hosting. For example, Firefox, where most of the UI is rendered by Gecko and some of it even runs within a browser tab. If you're a web developer and you run into a rendering engine bug, you report it and then ship a workaround for three years as you wait for enough people to update. If you're a Firefox front-end developer, you can just fix the Gecko bug and ship the fix along with your feature.
Quick question: I've been stuck doing CRUD for way too long but have always had an interest in low-level programming. Unfortunately, the few times I've tried to give it a shot I've come back to my CRUD world defeated; it all seems very alien to me. What do you think is the best way for your typical CRUD/web developer with no background in low-level programming whatsoever to dive into low-level stuff? Any tips or links to resources you may have are very much welcome :)
Hi, I'm interested in learning low level programming too! I've been taking classes, reading tutorials about programming, and starting projects on my own. It really depends on what kind of projects you want to work on. Some of the major types of projects include compilers, OS-level projects (drivers, kernel, etc.), networking (server/client), and graphics. You'll probably want to learn C and/or Assembly, probably x86 but I've found ARM to be fun. (and Raspberry Pi is built on it!)
- Operating Systems: Definitely check out MIT's open courseware on xv6! It's an implementation of Unix version 6. The course includes a git project that you can clone and play with yourself, as well as some labs. Here's the link: http://pdos.csail.mit.edu/6.828/2012/index.html
The OSDev wiki has some resources too: wiki.osdev.org
If you have a Raspberry Pi or are patient enough to try to emulate the hardware, here are some tutorials on ARM assembly for RasPi: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/
- Networking: Here, you'll probably want to look for tutorials on socket programming and the TCP/UDP protocol. People recommend Beej's tutorials, which are quite comprehensive and a good source of information, but I found some of his server code confusing and convoluted (deeply nested loops and if statements).
Here's a link to his site: http://beej.us/guide/bgnet/
Here's an easy-to-follow server/client tutorial: http://www.thegeekstuff.com/2011/12/c-socket-programming/ (for the general shape of a client, see the sketch after this list)
- Graphics: I have the least amount of exposure to this, but a good resource is at the website open.gl. Basically you'll want to learn about shader languages and how the graphics pipeline works.
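If it helps, here's the bare-bones shape of a TCP client in C showing the calls those socket tutorials walk through (getaddrinfo, socket, connect, send, recv). This is a sketch of my own, not code from either tutorial: error handling is minimal and a POSIX system is assumed.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6, whichever resolves */
        hints.ai_socktype = SOCK_STREAM;    /* TCP */

        int rc = getaddrinfo("example.com", "80", &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("socket/connect");
            return 1;
        }

        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        send(fd, req, strlen(req), 0);           /* fire off a minimal HTTP request */

        char buf[1024];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* dump whatever comes back */

        close(fd);
        freeaddrinfo(res);
        return 0;
    }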
If you run into a bug specific to Windows XP in a software application, you fix it for Windows XP with a workaround specific to WinXP, then you continue supporting it forever?
I'm not seeing the difference other than you probably run into that kind of breakage more frequently with the web.
Exactly - as in the original blog post by Yossi Kreinin, I'm talking about a difference in degree only, depending on just how many levels of dependencies and abstractions you're building on top of.
Your comment brings up a good point that writing portable code (cross-browser, cross-OS, across compilers, across processors, etc.) adds to the difficulty level because it requires you to target a higher-level abstraction (web standards, POSIX, ANSI C...) rather than a concrete implementation, while still dealing with implementation-specific bugs that leak through that abstraction.
The first time you open up the debugger and your v-table pointer is null (presuming you know what a v-table pointer is!), things start to get interesting.
Or there was that time I printf'd a 64-bit number and a random stack variable was corrupted. That was a lot of fun.
Memory protection? Haha. No.
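For anyone who hasn't had the pleasure: the classic way printf and 64-bit values go wrong is a format-specifier mismatch. This isn't my actual bug, just a minimal illustration:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        uint64_t big = 0x1122334455667788ULL;
        int nearby   = 42;

        /* Undefined behaviour: %d expects an int, but a 64-bit value is passed.
         * On a stack-based varargs ABI (typical for small embedded targets) the
         * second %d then reads the upper half of 'big' instead of 'nearby'. */
        printf("wrong: %d %d\n", big, nearby);

        /* Correct: the format specifier matches the 64-bit type. */
        printf("right: %" PRIu64 " %d\n", big, nearby);
        return 0;
    }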
For that matter, my team just came across a bug in our own code a couple of weeks ago: we were performing a DMA memcopy operation (for which you get to set up your own DMA descriptors yourself, of course) over a bus, and at the same time sending a command packet to the same peripheral that was being DMA'd to.
oops.
Expected things to be put into order for you? Nope. Not unless you implement your own queuing system. (Which we are doing for that particular component now.)
All in all it is a ton of fun though. I'm loving it. Having seen an entire system be built up from setting up a clock tree to init'ing peripherals to our very own dear while(1). (We actually moved away from while 1, async is where it is at baby! Callback hell in kilobytes of RAM! oooh yaaaah)
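To give a flavour of the queuing idea: below is a toy sketch, not our actual code. start_dma_copy and send_cmd_packet are hypothetical stand-ins for whatever the real HAL calls would be, and real descriptor setup and locking are device-specific and omitted.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef enum { REQ_DMA_COPY, REQ_CMD_PACKET } req_kind;

    typedef struct {
        req_kind    kind;
        const void *data;
        size_t      len;
    } bus_request;

    /* Hypothetical HAL hooks; the real versions would program the DMA engine
     * or the peripheral FIFO and arrange a completion interrupt. */
    static void start_dma_copy(const void *src, size_t len)  { (void)src; (void)len; }
    static void send_cmd_packet(const void *pkt, size_t len) { (void)pkt; (void)len; }

    #define QUEUE_LEN 16
    static bus_request queue[QUEUE_LEN];
    static volatile uint8_t head, tail;   /* head: main loop, tail: ISR */
    static volatile bool bus_busy;

    /* The main loop adds work here instead of touching the peripheral directly. */
    bool bus_enqueue(bus_request r) {
        uint8_t next = (uint8_t)((head + 1) % QUEUE_LEN);
        if (next == tail) return false;    /* queue full, caller retries later */
        queue[head] = r;
        head = next;
        return true;
    }

    /* Issues the next request if the bus is idle. Real code would disable
     * interrupts (or use an atomic test-and-set) around this check. */
    void bus_drain(void) {
        if (bus_busy || head == tail) return;
        bus_busy = true;
        bus_request *r = &queue[tail];
        if (r->kind == REQ_DMA_COPY)
            start_dma_copy(r->data, r->len);
        else
            send_cmd_packet(r->data, r->len);
    }

    /* Called from the "transfer complete" interrupt: retire the request that
     * just finished and kick off the next one, so DMA copies and command
     * packets never overlap on the bus. */
    void bus_transfer_done_isr(void) {
        tail = (uint8_t)((tail + 1) % QUEUE_LEN);
        bus_busy = false;
        bus_drain();
    }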
There is certainly nothing easy about it. I have quite some experience with low-level and embedded devices, but hardware has progressed to the point where a Cortex M3 is quickly becoming the jellybean baseline throwaway uC, and have you seen the datasheet of one of these things?
The amount of stuff they do is incredible, and everything is interconnected in unexpected and intricate ways.
I haven't messed around with x86 enough to know whether it is actually somehow easier than the ARM stuff, but I certainly agree with you that CPUs are damn complicated.
I loaded up the datasheet for an M3-based part when typing up this reply. 384 pages. Contrast that with the TI 74181 ALU datasheet from the days of old: 18 pages, most of which are just detailed diagrams of the distances between the pinouts. The logic diagrams fit on a single page. You can build a simple CPU using one of these chips in a few hours in your basement.
Hardware is only going to get more complicated. At what point does it become so complicated that no one person can reasonably understand how a computer works "under the hood", even from an abstract level?
In the company I used to work for we covered quite a range of development targets, from embedded micros up to very high level factory control systems, and I always found the most challenging stuff was "in the middle" - stuff at the systems level.
At the bottom, it really felt like programming a machine, and I found it all good fun, much like solving a puzzle (with occasional obscure headaches, mostly compiler related). At the top everything is pretty abstract, and there's more freedom to do conceptually interesting things (for example we were doing quite a bit of adaptive control type stuff, which would have been a nightmare to write with lower-level languages).
In the middle, however, it seemed to me to be largely just a complicated network of interacting conventions. Those systems are neither firmly grounded in "reality", because they are generally trying to abstract away those details, nor are they "theoretically pure", because they need to be efficient (though there is a lot of interesting stuff there). What that means is that you simply have to learn and understand all those human-defined conventions to know what you're doing, and that makes solving problems at that level more difficult, or perhaps rather it requires much more hard-won expertise (which I never got much of - I just bugged other people until they would help me out!).
Obviously, my views are coloured by my personal experience, so make of them what you will.
I get the point of the blog author and I partly agree with it; but there are different types of "hard" and "easy".
To produce much of value at the low end you need a really comprehensive understanding of the technology at the level you are working at, which has significant upfront learning (and likely just plain aptitude) costs that aren't really addressed much here. Once you have that knowledge, then yes, you get far fewer surprises, but acquiring it in the first place is not at all trivial or "easy" (though it may seem like it if you're a geek who's been banging away with assembler for years... you've just forgotten how much effort you expended at that stage, probably because it felt fun to learn).
At the higher-end you can string together a bunch of frameworks and glue code you cut and pasted off Stack Overflow and get something that pretty much does what you want, most of the time, maybe, while barely understanding the underlying technology.
Which is "easier" or "harder" depends a lot on what you mean by those terms.
Also the assumption that "high-level" means HTML/CSS/JavaScript is not that useful for the overall debate since not all high-level development is as annoyingly unpredictable as HTML/CSS/JavaScript.
A beautiful elaboration of what I've been feeling for many years. I don't get all this excitement over web programming. The process is a pain in the neck -- why doesn't this color turn red when it's supposed to? Oh, this IE workaround conflicts with this Firefox bug. To me it feels like a house of cards held up with old, desiccated duct tape. Yeah, you can do amazing things with cards and duct tape, but because of the shaky foundation, it ends up being vastly more frustrating and so much more work than it really should be.
I haven't responded by going as far down the stack as this guy has -- although I've done a fair bit of assembly and enjoyed it, I spend most of my time in Python and Erlang -- but coding web apps can be such a ghetto.
If you go further down, at least on x86, you'll find a lot of duct tape. I guess most people who have written serious code in x86 assembly wonder how those machines can actually work so reliably in practice. Probably because kernel developers do a good job.
// my perspective: I started with low-level C and assembly, mainly on x86, and do web programming nowadays.
30 years after it was inserted into the architecture to work around some software issues/bugs, bootloaders still have to fool around with it (the A20 gate) every time an x86 machine is booted. Lovely.
Well for one thing, the machines need to maintain 8086 compatibility even though it is horribly outdated. IIRC the actual processor just runs microcode that emulates an x86-compatible system or something to that effect.
It's sort of like gcc where layers keep getting put on top of layers and only like 5 wizards from MIT know how it actually works.
The word "emulation" is uncharitable. Breaking a high-level instruction into micro-operations allows the architecture to present a consistent ISA to programmers while optimizing execution for the micro-architecture. Yes, x86 originally did this out of necessity. But it's a little unfair to call it "duct tape". Case in point: ARM sometimes does the same thing, despite being a RISC.
In other words: design your clean, orthogonal RISC ISA however you want: at some point in the future, some processor designer is going to end up translating those instructions into something else.
As for layers upon layers of abstractions: welcome to computer science. ;)
Playing the piano is easier than playing a violin. I would argue that a world-class pianist is about as skilled as a world-class violinist.
If you could allocate skill points like an MMO, the violinist is spending 3 points on instrument mastery, and 7 points on musical mastery, while the pianist spends 1 and 9.
I hate spending my limited skill points on "browser mastery," so I mostly do lower level things.
It seems to me that musicians play more complicated things the easier the instrument is, so it does even out at the end. Drummers do some insane things, even though they only have five drums to hit (whereas a piano has 60ish keys).
I agree with your general point. So maybe the following is the exception that proves the rule, but:
An acoustic piano has 88 keys, and classical or jazz piano music tends to use many of them. Whereas a 60-key electronic keyboard is often subject to the Flock of Seagulls treatment, wherein one key at a time is held down. :) It's roughly like touch typing vs. hunt and peck.
But aside from that example, yes, I think you're onto something.
I'd say more like 7 and 3 than 3 and 7. Still, the violin has always seemed strange to me. It doesn't help at all with the established 12-tone scale, or with polyphony or chords, and it sounds unbearable in a beginner's hands. Sadistic parents allocate about as much skill to it as to the piano, but it just seems like a bad instrument for producing western music.
The violin requires physical adaptations in order to progress past a certain level. You have to start developing those adaptations when you're very young. Otherwise you'll never be quite as fluid with it as someone who has. I suspect that there are no world-class violinists who started as adults.
The big difference, and the reason most find low level harder, is that you can do a pretty bad job on front end code and still produce something of value. Whereas low level code tends to need to be more solid.
I would not say that the dependency between layers is like a tree where, if the trunk breaks, everything does. I'd rather see the levels of programming abstraction as a graph with lower- and higher-level nodes.
As pointed out in the comments above, x86 has a surprising amount of duct tape in it. There are plenty of much saner platforms, but programming still lives happily on x86. Why? Because very often it does not matter at all: x86 is hidden from us by the OS, compilers and virtual machines. That's where the beauty of programming shines: if you have a lame implementation below your level, just abstract it away as far as resources allow (e.g. full-scale managed-memory programming was theoretically possible in DOS, but no one did it because of CPU/memory constraints). Another example: the Win32 API, MFC and other native stuff is horrible, but the .NET platform seems quite sane. JavaScript is horrible, but jQuery is, um, beautiful.
On the other hand, a hardware guy still operates with an abstraction of a microcontroller. If memory is corrupted, his abstraction fails also. It's just that the HW guy does not have to invent good abstractions often (or rather at all).
The point is, abstracting may be self-healing: if something is broken at some level, a level above can use the good parts to build a fully functional emulation.
This is why I think higher-level programming is easier. I love to build an abstraction around a complication so that I don't have to think about it anymore. It was only very recently that I began to embrace what kind of programmer I am. I prefer high level abstractions; I prefer the mathematical underpinnings; I detest doing things manually. Other people have the opposite inclinations and I'm happy that the world has both.
I have recently had to work with C++ on some systems-level code. It's the lowest-level I've had to work on for a while. I find that having to think about things like memory management and system peculiarities like buffer alignments gets in the way of thinking about the higher level algorithm or problem. In this case I did not have the luxury of being able to build abstractions around some of the quirks I encountered.
On the other hand, when I work at a high level with a suitably high-level language, I can recognize patterns and abstract them away. For example, I've seen many instances of the pattern
b = a.f(); if (b != null) { c = b.g(); if (c != null) {...}}
This pattern is fairly common, and in a suitably rich high-level language I can abstract it and just write
c = a.f().g();
It saves me a lot of thinking and honestly feels a lot easier.
This seems pretty spot on to me. More moving parts = more failure points...as a web engineer, I find myself bitten more by the trappings of abstract code structure than by things like faulty algorithms. This ties pretty heavily into why I feel the standard technical interview for web engineering talent is severely broken, but I digress.
The author makes some valid points, but he seems indifferent to the most important question: why program computers at all? He doesn't seem to enjoy it much (although that may just be his sarcastic tone). I think we need more devs who do it because they want to accomplish something particular, and fewer devs who are doing it cause computers are kewl (or to get paid, for that matter). With a utility view, the choice of low vs. high level gets less emotional:
* high-level is more productive in the short run, but may hinder you in the long run
* mid-level (C-ish level) tend to be more portable and more future proof
* low-level will probably give you better resource usage (speed, memory etc), but not necessarily these days
Personally, I find C++ (plus open source libraries) gives the best trade-off, but that's probably dependent on the task (I do audio/video analysis/synthesis).
Most of my enjoyment of programming comes from the coolness of math, logic, and simulation. Most of my hatred of programming comes from the brittle, messy, confusing, uneven frameworks laid down before me by other programmers. My career consists of maybe 10-20% of the former. The rest is the latter. Consequently, I've really hated, and I mean hated, doing this for a job.
Another way to look at it is, just as in all other spheres, I enjoy the abstract intellectual part and hate the human part.
Out of interest, what programming languages / frameworks do you use?
I can relate to your description, as it seems similar to my experience with back-end development using Django / Python, which I found well put together and well thought out.
Compare that to front-end JavaScript frameworks, where (it may just be my lack of experience in JS) everything seems far more fragile, poorly documented and incomplete.
Sure, there's an ubersmooth learning curve with HTML/CSS/JavaScript, but then you get to heavy client-side SPAs and supporting every device and format under the sun, and then someone says "it's not good enough, try and make it feel like a 100% native iOS app - and no, you can't just write an iOS app"... and good luck hiring a good front-end guy (at least if you don't go the expensive "hire 5 instead and fire 4 of them after two months" route).
I agree with this article. The Web is a shambling mess of crap technologies, just a Jenga tower of crap. When is it going to collapse under its own weight? The rise of native apps suggests that the collapse has already begun.
This seems like a kind of selection bias to me. For the most part we only build "high level" things on top of "low level" things when we're successful in finding a use for them, by definition. The more successful you are, the easier success seems. Put another way, if they weren't "easy" then there wouldn't be a "high level" above them to make the concept of "low level" meaningful or concrete.
Also, as a caution against asserting low level means "easy", I will take this opportunity to drop one of Murphy's Laws:
"An easily understood, workable falsehood is more useful than a complex, incomprehensible truth."
sometimes written as
"All abstractions are wrong, some abstractions are useful."
As an example of a low level abstraction that is both useful and wrong, consider the libc strtod() function, which converts a decimal string to a native floating point representation. If I were to give you the pre-parsed integer significand and integer exponent, base 10, then you'll find there's no mechanism in the C standard library to convert those two integers to the correct double value, despite strtod() having to do the very same thing, at some point after the parsing stage. If all you ever want to do are string to double conversions then this function will always have appeared quite low level, but the reality is that this is only the case because it's always been so damned useful.
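To make the gap concrete: with only the standard library, the usual workaround is to re-serialize the two integers into a string and hand it back to strtod(), because the tempting significand * pow(10, exponent) isn't guaranteed to be correctly rounded. A sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* No libc call takes (significand, decimal exponent) directly, so we
     * re-serialize the pair and let strtod() do the correctly-rounded
     * conversion it already knows how to do internally. */
    double decimal_to_double(uint64_t significand, int exponent) {
        char buf[64];
        snprintf(buf, sizeof buf, "%lluE%d",
                 (unsigned long long)significand, exponent);
        return strtod(buf, NULL);
    }

    int main(void) {
        /* 123456789 * 10^-2 == 1234567.89 */
        printf("%.17g\n", decimal_to_double(123456789ULL, -2));
        return 0;
    }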
Low level things are intrinsically useful and the more useful something is the less wrong it seems.
> Put another way, if they weren't "easy" then there wouldn't be a "high level" above them to make the concept of "low level" meaningful or concrete.
Yeah, the higher level also includes things that are largely rejected by the industry and moribund in research, like DSLs, code generation, modeling, algebra stuff and proved programs. Things that exist more or less at every level, but are widespread in CPU making.
I suppose if you just want to program because you like to program, then this article makes sense. If you want to program because you want to make stuff non-programmers can use, then you are more interested in productivity and distributability and are likely to choose higher level languages like JavaScript.
I've been thinking about this a bit lately. I'm not a low-level developer in the slightest, but the question I've been pondering is: when you get to the low-level stuff, is it a lot more focused on simple I/O? The complexity (as I understand it) comes from understanding the registers, addresses and hexadecimal stuff (which I don't really understand).
If I understand correctly, the OP is basically saying the same thing. The complexity in the higher levels comes in when we try to create an API further up the stack that is responsible for manipulating the more understandable data into something the device understands at a low level. It's the difference between retrieving a raw file stream and getting the file in standard UTF-8 format, which anybody can read.
What are your thoughts on this? Have I got it right, or am I at least on the right track?
The conundrum is that the high-level customer-facing stuff wouldn't be possible without the low-level behind-the-scenes work that was done on protocols and kernels.
If you look at the author's actual arguments, you realize most of them rely on other people. I don't know about others, but I introduce many more bugs in my code when I write low-level code as opposed to high-level code. Having to deal with the chance that someone else introduced a bug below me is, I think, worth it if I don't have to worry about anywhere near as many bugs in my own code.
I'm sure designing an instruction set was not an easy task. The instructions are small, but the simplicity, and the beauty of keeping it simple, is in itself a great accomplishment.
How floating-point numbers are represented in computers is such a neat hack, yet it can be written down as a simple formula with pen and paper.
That doesn't mean you could have come up with it yourself.
Everyone knows E = mc^2. It's easy, but you didn't come up with it.
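For reference, the "simple formula" for a normal 32-bit IEEE 754 float is value = (-1)^sign * (1 + fraction/2^23) * 2^(exponent - 127), and a few lines of C can check it (normal numbers only; zeros, subnormals, infinities and NaNs have their own rules):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        float f = 6.25f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);           /* reinterpret the raw bytes */

        uint32_t sign     = bits >> 31;
        uint32_t exponent = (bits >> 23) & 0xFF;  /* stored with a bias of 127 */
        uint32_t fraction = bits & 0x7FFFFF;      /* 23 bits, implicit leading 1 */

        /* value = (-1)^sign * (1 + fraction/2^23) * 2^(exponent - 127) */
        double value = (sign ? -1.0 : 1.0)
                     * (1.0 + fraction / 8388608.0)
                     * pow(2.0, (int)exponent - 127);

        printf("bits=0x%08X sign=%u exp=%u frac=0x%06X value=%g\n",
               (unsigned)bits, (unsigned)sign, (unsigned)exponent,
               (unsigned)fraction, value);
        return 0;
    }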
As a young web engineer who has had the pleasure of dealing with what seems like 30 different versions of RSS feeds, which also appear to be evolving in random directions like living things, I can confirm this. (I've also messed around in C, and even Assembly at one point.)
The higher you go, the more abstractions, specs, standards and frameworks there are. It's actually more complex, but once you understand "all" of it, it's not that hard. Lower level is easier to get started with, but the more complex the problem, the more difficult it is to solve.
This is overly simplistic. Sure, low level programming can be easy if you understand the basics of what you are doing, and the same is true for high level programming.
However, there are aspects of low level programming which are far more technical than anything you are likely to run into programming UI's or the like. Thread scheduling, OS development, compiler development, standard library stuff, etc. tend to be quite challenging from a technical perspective (I've done all but one of them.)
The author is picking one type of low level development and painting the rest with the same brush. Low level development occurs on more complicated architectures as well.
Of course, high level development can be challenging as well, but often in a different way. The challenge here lies in understanding the quirks of your libraries, creating a good user experience (terribly hard at times, but not often technically challenging), working around oddities of your platform, etc.
>And it sucks when you change a variable and then the other processor decides to write back its outdated cache line, overwriting your update.
Well... that's why you use memory fences or atomic operations, even for writes to types that would otherwise be atomic. (volatile alone only keeps the compiler from caching the value in a register; it doesn't order anything at the hardware level.)
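A minimal sketch of what I mean, using C11 atomics: the release store publishes the data and the acquire load on the other side is guaranteed to see it, which plain volatile does not promise. (Assumes a compiler and libc with C11 <threads.h> support, e.g. recent glibc.)

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static int payload;                    /* data being handed to the other thread */
    static atomic_bool ready = false;      /* the "published" flag */

    int producer(void *arg) {
        (void)arg;
        payload = 42;                                    /* write the data...  */
        atomic_store_explicit(&ready, true,
                              memory_order_release);     /* ...then publish it */
        return 0;
    }

    int consumer(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                                            /* spin until published */
        printf("saw payload = %d\n", payload);           /* guaranteed to print 42 */
        return 0;
    }

    int main(void) {
        thrd_t p, c;
        thrd_create(&c, consumer, NULL);
        thrd_create(&p, producer, NULL);
        thrd_join(p, NULL);
        thrd_join(c, NULL);
        return 0;
    }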
Low-level programming requires a great deal of knowledge, in exchange for which you get an enormous amount of control over how your efforts behave. High-level programming offers the prospect of a quick start, but, especially with today's web frameworks, you tend to rapidly find yourself bogged down in dealing with magic that's baked into the platform and which nobody really bothered documenting very well because "it just works" -- which is fine, except when it doesn't, and you get to spend time working on your tools, instead of the problem you're using them to solve. I don't want to RAILS name any names here RAILS, but RAILS I'm sure every RAILS web developer has RAILS at least RAILS some RAILS inkling of RAILS what I'm RAILS talking RAILS about RAILS here RAILS RAILS RAILS.
My observation of the programmers with whom I've worked, and whom I know socially, has been that a given hacker's degree of intelligence and capability tends to be inversely proportional to the thickness of the stack on which he chooses to base his efforts; the smartest person I know, who has multiple doctorates and is currently busy breaking new ground in bioinformatics, doesn't even use libraries if he can help it, and complains constantly about the frustrating misbehaviors of those which turn out to be irreducibly necessary. Perhaps I've merely been observing a constellation of coincidence, but on the whole I rather doubt it.
(P.S. for the benefit of anyone who feels like I've just called him stupid: my last gig was a Rails job, and if you took all the programmers I've ever known and broke their smarts up into quintiles, I'd be somewhere in the upper second, maybe the lower third. It doesn't take genius to recognize genius at work; I've merely been uncommonly fortunate, for the most part, in my choice of friends and colleagues.)
(P.P.S. for general use: Speaking of recognizing genius at work, it scintillates in every jot and tittle of the article here under discussion.)
Sounds like your friend with multiple doctorates has put himself in situations where he has time to not use libraries, rather than the necessity of shipping everything asap.
Spoken like someone who's never had a particular software stack and methodology forced on him, and then been hamstrung by it. The genius of the friend I mentioned is evident, as much as anywhere else, in the way he has placed himself beyond the reach of the sort of mediocre-to-incompetent management under whose yoke web developers so often find themselves struggling.
Yep, sounds like most of the bioinformatics people I know. If there is a wheel to be reinvented, do it. Who cares about mature, tested code when you can write it yourself? Guess he is smart enough not to need that.