The difference is that you assume the learning + industry experience + chance at a job later will make up for not being paid. In this case, there is no potential for getting a foot in the door, no industry experience, and the 'learning' is questionable, particularly when you could use the same skills to make $25-50 an hour freelancing.
It's hard to say what it means to "understand" an operating system, though. Knowing how it works in a fairly broad sense, or even knowing how specific parts of the kernel work, doesn't mean you completely grok the system.
I mean "understand" in the broad sense of "stuff all OSes do": certainly. I realize that there's memory management and so on going on under the hood, and I have a basic idea of how that works, at least on Windows. (I use Windows, Linux, and Mac regularly, and can reach each from where I'm sitting.)
I have a decent enough understanding of Windows system internals and how it actually handles memory allocation, permissions, etc., but not down to the lowest level; just a vague idea of which system files do what.
I assume Linux and OSX handle things somewhat similarly aside from implementation details, but I don't really know.
I haven't had the time to poke around the Linux kernel's internals (though I have a pretty good understanding of the system down to that level), and I doubt I'll ever mess with OSX internals beyond what's necessary to have it 'just work'.
I figure I will master what I need to get things to do what I want. I find it fascinating, and at some point I'll probably have a project that requires me to hack on the Linux kernel in some way, shape, or form (unless someone else already has). In the meantime, though, I remain knowledgeably ignorant (I know that I do not know), which I think will have to do for now.
BIOS-level stuff and all of that is black magic to me. :P
It's hard to say what is and isn't necessary from a pure development standpoint. From a curiosity standpoint, assuming unlimited time, complete knowledge of a system is better than limited knowledge. From a practical standpoint, such knowledge may make you a 'better' coder, but it may also tie you more to one OS than another. It cuts both ways, though: by understanding what does and doesn't change from one system to another, writing portable code becomes much easier.
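As a minimal sketch of that last point (get_page is a made-up name for illustration, not any real API): once you know that Windows' VirtualAlloc and the POSIX mmap are doing essentially the same job, wrapping the difference is a few lines of C.

    #include <stddef.h>

    #ifdef _WIN32
    #include <windows.h>

    /* Windows: commit a readable/writable region with VirtualAlloc. */
    void *get_page(size_t len) {
        return VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE,
                            PAGE_READWRITE);
    }
    #else
    #include <sys/mman.h>

    /* Linux/OSX: map an anonymous readable/writable region with mmap. */
    void *get_page(size_t len) {
        return mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON, -1, 0);
    }
    #endif

The concepts line up almost one-to-one; it's mostly the names (and the failure sentinels, NULL vs. MAP_FAILED) that differ.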
So do I care that I don't know? Yes. Do I think it's the end of the world and must be remedied immediately? No. I'll have plenty of opportunity to satisfy my desire to know more about OS internals if I ever take on a project that requires me to know them exceptionally well. Given my interest in them, at some point that will inevitably happen. In the meantime, I'm content to know that I do not know, and that in an ideal future, someday I will.
What I wonder about is what they mean by "stealth programming". I can only think of programming with white text on a white background, but that wouldn't serve any security-related purpose.
From reading that, it's clear that the shellcode was obfuscated ('encrypting' it three times, though, would be unnecessary), but that's just a good way to muddle things up. And while it was obviously a sophisticated attack in this day and age of cybercriminals who go for the easiest target available, nothing mentioned there hasn't been possible in almost any buffer overflow attack. Code obfuscation has been used for years for copy protection and to prevent static reverse engineering in general, and although it's nonstandard in exploitation, it's by no means unheard of. In my opinion, a more impressive exploit would be one that used all printable ASCII (which is also possible).
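To make the 'three times is unnecessary' point concrete, here's a minimal sketch of the kind of XOR 'encryption' loop typically applied to payloads (the bytes are a placeholder string, not real shellcode). Because XOR composes, three passes with keys k1, k2, and k3 collapse into a single pass with k1 ^ k2 ^ k3:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* XOR every byte of the buffer with a single-byte key. */
    static void xor_buf(uint8_t *buf, size_t len, uint8_t key) {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= key;
    }

    int main(void) {
        uint8_t payload[] = "placeholder bytes, not real shellcode";
        size_t len = strlen((char *)payload);

        xor_buf(payload, len, 0xAA);   /* 'encryption' pass 1 */
        xor_buf(payload, len, 0x5C);   /* pass 2 */
        xor_buf(payload, len, 0x0F);   /* pass 3 */

        /* One pass with the combined key undoes all three. */
        xor_buf(payload, len, 0xAA ^ 0x5C ^ 0x0F);
        printf("%s\n", payload);       /* prints the original string */
        return 0;
    }

So the triple pass buys you nothing over a single XOR with a different key.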
On a side note, some of the terms used are either misused or just wrong: although the payload may have been obfuscated, 'encryption', at least to me, implies a separate key and decryption scheme, which doesn't really work well from a shellcode point of view. You'd be better off using a static 'encryption' scheme like ROT13, but that seems more like obfuscation in this day and age, particularly since the code to deobfuscate it has to be built in.
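That 'built in' part is the giveaway. In a toy ROT13-style sketch like the one below, there's no secret key anywhere; the decode routine travels with the payload, so anyone who captures the sample can run or read it. That's obfuscation, not encryption:

    #include <stdio.h>

    /* ROT13: a fixed, keyless transform. The same routine both
       encodes and decodes, so it has to ship with the payload. */
    static void rot13(char *s) {
        for (; *s; s++) {
            if (*s >= 'a' && *s <= 'z')
                *s = 'a' + (*s - 'a' + 13) % 26;
            else if (*s >= 'A' && *s <= 'Z')
                *s = 'A' + (*s - 'A' + 13) % 26;
        }
    }

    int main(void) {
        char msg[] = "Uryyb, jbeyq";   /* "Hello, world" encoded */
        rot13(msg);                    /* 'decoder' runs on the target */
        puts(msg);
        return 0;
    }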
TL;DR: I think they throw around 'encryption' in places where it doesn't make sense because it sounds scary, and it doesn't seem like any of the techniques used were 'new' or somehow more sophisticated than what was previously possible.
For simple IDS evasion, at least, so that you aren't throwing up flags. It could also have been done to make forensics much harder.
OK, so they used what are called packers not only to obfuscate the malware code to bypass signature-based A/V, but also to hide it inside other binaries or DLLs to further evade heuristic defenses. Then a reverse encrypted tunnel for control of the infected machine was routed over normal HTTPS, also undetectable by an IDS. It connected to dynamic DNS domains such as yahoo1.dyndns.org. 'Reverse' meaning it connects back to the attacker, allowing SSH-like access to the compromised host via the trojan.
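For anyone unfamiliar with why 'reverse' matters, here's a minimal sketch of the connect-back idea (the hostname is a made-up stand-in, and the actual TLS/tunneling layer is left out; this just connects and exits). From the firewall's perspective, this outbound connection to port 443 looks like any ordinary HTTPS client:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void) {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        /* The dynamic DNS name resolves to whatever IP the
           attacker currently points it at. */
        if (getaddrinfo("example.dyndns.org", "443", &hints, &res) != 0)
            return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
            puts("outbound connection established");

        /* A real trojan would wrap this socket in TLS and proxy
           shell I/O over it; this sketch stops here. */
        if (fd >= 0) close(fd);
        freeaddrinfo(res);
        return 0;
    }

Because the victim initiates the connection, there's no inbound port to block or flag.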
These are all extremely advanced (but known) evasion steps for a very targeted attack. It's rare to see all of them successfully used in one attack because of the complexity and skill required.
If you encrypted or otherwise obfuscated the payload of the attack, it would make log analysis (particularly at the network level) difficult, and might help get around things like an IDS. It'd also make some forms of forensics much harder.
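To illustrate why signature-based inspection fails here (a toy sketch; the 'signature' string is invented for the example): a byte-for-byte scan that trips on the plaintext payload finds nothing once the same bytes have been XOR-encoded on the wire:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Naive signature scan: report whether sig occurs in buf. */
    static int contains(const uint8_t *buf, size_t blen, const char *sig) {
        size_t slen = strlen(sig);
        for (size_t i = 0; i + slen <= blen; i++)
            if (memcmp(buf + i, sig, slen) == 0)
                return 1;
        return 0;
    }

    int main(void) {
        uint8_t traffic[] = "GET /cmd.exe HTTP/1.0";  /* invented payload */
        size_t len = strlen((char *)traffic);

        printf("plaintext match: %d\n", contains(traffic, len, "cmd.exe"));

        for (size_t i = 0; i < len; i++)   /* XOR-encode the stream */
            traffic[i] ^= 0x42;

        printf("encoded match:   %d\n", contains(traffic, len, "cmd.exe"));
        return 0;
    }

The first scan prints 1, the second prints 0: same payload, but the pattern-matcher never sees it.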