I literally wrote a book on Meteor.js and still have to check the manual for stuff all the time. The fact that engineering interviews are essentially roast sessions / pop quizzes is insane.
I interviewed with Snapchat a few years ago and was asked to write a Sudoku solver. Some context: I had never played Sudoku in my life, lol. I came up with a naive (unoptimized, unrefined) solution and didn't get past their third round.
I find this interesting because grade inflation is real. There are so many straight-A students that it loses its value as a measure. But if you can do something without looking it up, I would agree that counts as proficiency.
A good team member is a larger force multiplier than a single rockstar.
I realized one day why I behave in ways like this: working in a clear-cut environment has led me to treat normal communication literally. Someone once commented that I 'never lie', which I understood was much more true than false, but ultimately still false.
Similarly, if I see more than one or two 9s or 10s on self-assessed skills, I feel that's an invitation to "sock it to me" with knowledge questions when interviewing someone. When people rate themselves 4-6, I'm a lot more reasonable.
> When I tell to managers that I’m good at documenting, they always say: great, we need better documentation! But then, one of the following may happen:
I relate so much it hurts. The industry at large seems unable to see the lost opportunity of significant investment in technical documentation.
Dang, while I appreciate your one compiled and one interpreted language way of life (pragmatic!), this makes me sad:
> Swift: I’m not an Apple person.
I'll admit, if I wasn't a professional iOS code slinger I probably never would have approached it, but at this point it's by far my favorite language to write (especially prototype) in.
I'm always interested when it comes up, especially when I see info on its progress for usability on non-Apple systems, but the last time I remember seeing info on that (multiple months ago, admittedly) was that it was coming along, but there wasn't quite parity with the Apple ecosystem and some (core?) libraries were different/not as good?
The impression I got (from comments on here from people trying to use it) was sort of an early Mono .NET type of situation (maybe not that forked, but still). It's hard to put any effort behind learning a language when it feels like you would be a second-class user. I spent way too much time and effort being the guy trying out the experimental Linux support for projects in the late '90s and early 2000s, and I have less time now, so my explorations need to be a bit more directed.
It came down to Swift or Rust for learning a new language a while back for me, and I picked Rust for the reasons above. I've yet to do anything with it (and I'll have to brush up on it yet again when I do), but at least I don't feel like I'm getting second class support from the language.
I wouldn't mind being mistaken, or this already having been addressed, with Swift's ports to other systems now at parity with Apple's. That would be nice. I wouldn't mind spending some extra time on it then; it does look to be an interesting language.
I am a big fan of protocol-oriented development. It allows you to do component-style programming, where it's very simple (swifty) to extend functionality onto structs/classes in an easy-to-understand, reusable way.
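For a concrete (if toy) picture of what I mean, here's a minimal sketch; the types are made up purely for illustration:

    // A capability expressed as a protocol, with a default implementation
    // that every conforming type picks up for free.
    protocol Describable {
        var name: String { get }
    }

    extension Describable {
        func describe() -> String {
            return "This is \(name)"
        }
    }

    // Plain value types opt in just by declaring conformance.
    struct Sensor: Describable {
        let name: String
    }

    struct Widget: Describable {
        let name: String
        let size: Int
    }

    let items: [Describable] = [Sensor(name: "thermometer"),
                                Widget(name: "knob", size: 3)]
    for item in items {
        print(item.describe())   // "This is thermometer", "This is knob"
    }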
Additionally, language features like guard/optionals/etc... allow you to deal with error states & control flow easily.
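For instance, a small hypothetical sketch (the config-parsing helper is invented) of the guard/optional style for rejecting bad input without nested ifs:

    // guard lets you bail out early on missing or invalid data,
    // so the happy path stays unindented.
    func port(from config: [String: String]) -> Int? {
        guard let raw = config["port"],
              let value = Int(raw),
              (1...65535).contains(value) else {
            return nil
        }
        return value
    }

    print(port(from: ["port": "8080"]) as Any)   // Optional(8080)
    print(port(from: ["port": "bogus"]) as Any)  // nil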
Two good videos I'd recommend on protocol-oriented development:
I wrote C++ in college before I started with iOS/Obj-C around 2010, and 75% of my work from then until Swift launched was Obj-C; the rest was Java. So, just from a readability and writing standpoint, the verbosity (or lack thereof) was a huge win, though that's a bit subjective and more of an aesthetic reason.
1. I really like the let/var system, combined with the if let / guard let conditional assignments. It might not work for everyone, but I write cleaner code because of it. I also love optional chaining, and I've really molded my thinking around it, as it often encapsulates large chunks of logic in a single line (essentially: do we go past here or not, based on the state of the data and whether things are set/valid). It's null checking that doesn't feel bolted on. (There's a small sketch of this after the list.)
2. Swift has made some big changes from release to release, sometimes breaking existing code, but my largest codebase is ~70k lines, and it's taken me at most a few hours to get rolling again (FWIW, the auto-updater did not work for me on 2.2->3, I believe it was). That said, the changes are worthwhile. Built-in JSON (de)serialization via the Codable protocol was a big upgrade for me (also sketched after the list), removing a vast amount of boilerplate, as well as my reliance on a 3rd party library (although big thanks to NerdRanch for FreddyJSON, it served me well).
3. Speaking of 3rd party libraries, CocoaPods has treated me well. Easy to use, not too difficult to create your own custom libraries and manage them from your own git repos.
4. I know I don't use them to their full potential, but the higher-order functions are a real game changer (see the sketch after this list). Those operations, combined with my own drive over the last ~5 years or so to write more tightly coupled, functional code, have resulted in far more maintainable, easy-for-humans-to-parse systems.
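On point 1, a rough sketch of the optional-chaining pattern I mean (the types are invented for the example):

    struct Profile { var avatarURL: String? }
    struct User { var profile: Profile? }

    let user: User? = User(profile: Profile(avatarURL: "https://example.com/a.png"))

    // nil unless user, profile, and avatarURL are all set -- one line
    // stands in for three nested nil checks.
    if let url = user?.profile?.avatarURL {
        print("loading \(url)")
    }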
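On point 2, the Codable win in miniature (the field names here are just made up):

    import Foundation

    // Conforming to Codable is all it takes to get JSON decoding,
    // replacing the hand-written mapping I used to need.
    struct Article: Codable {
        let id: Int
        let title: String
        let tags: [String]
    }

    let json = """
        {"id": 1, "title": "Hello", "tags": ["swift"]}
        """.data(using: .utf8)!

    if let article = try? JSONDecoder().decode(Article.self, from: json) {
        print(article.title)   // "Hello"
    }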
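And on point 4, the kind of toy pipeline that used to be a hand-rolled loop:

    let prices = [12.0, 3.5, 40.0, 7.25]

    let discounted = prices.map { $0 * 0.9 }         // apply 10% off
    let affordable = discounted.filter { $0 < 10.0 } // keep the cheap ones
    let total = affordable.reduce(0, +)              // and sum them

    print(total)   // 9.675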
Granted, it's not all daisies and roses. I hate how it handles strings, and they can't seem to settle on an internal representation/manipulation mechanism. The safety of the whole ecosystem makes working with raw byte representations/pointers a bit of a hassle when you need to do it, but it isn't terrible/impossible.
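To make the string complaint concrete, a quick illustration of the ceremony I mean (roughly today's API; the details have shifted between releases):

    let s = "Héllo, wörld"

    // No integer subscripting -- s[1] doesn't compile. You walk String.Index values.
    let second = s[s.index(s.startIndex, offsetBy: 1)]   // "é"

    // Slicing likewise goes through String.Index rather than plain Ints.
    let start = s.index(s.startIndex, offsetBy: 7)
    let tail = s[start...]                               // "wörld"

    print(second, tail)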
I'm by no means an expert, and just by the nature of my work and my responsibilities (especially in other domains) I don't feel that I've had the chance to truly dig into the language for all it's worth. For instance, when I watched this video:
My mind was blown; I hadn't realized just how much I was under-leveraging the type system, and I hope to have some time to do a few personal projects to really integrate some of the more core pieces of the language into my workflow soon.
This is already huge and ranty, so if you have any pointed questions I'd be happy to take a stab.
Interesting. I have to admit I have a stereotype and prejudice: that any Westerner married to a Chinese woman is pro-China. I saw you have some sensitive Chinese words in your username in a different community, so I thought it was a Chinese guy in a Western country behind it.
When your wife does Falun Gong, and her father spent a decade-plus in semi-prison during the Cultural Revolution, that tends not to bring out the brightest side of the Chinese Communist Party ;-)
I feel like this material is good for reference, but not so much for learning (especially if starting from zero with your first OS). I applaud the effort though.
I'll drop a question since I see Ciro Santilli is around. I want to start learning low level programming. Yet, I feel awfully overwhelmed every single time I try. Any recommendations you might have?
If you still have the choice, study something that is related to laboratory work rather than programming, because anyone can buy a laptop and learn to program, but almost no one can access a lab.
If you can only program, first choose an application that you are passionate about, that is new, hard and important, and then learn whatever you need to achieve that goal. Application first, method later.
After that, if you still want to learn low level programming... :-) use emulators + minimal examples like in my tutorials, read / step debug the source of major projects and document that somewhere, and then try to modify those major projects with useful extensions.
Start with the book "Computer Systems: A Programmer's Perspective" by Bryant and O'Hallaron. Become fluent in C and its toolchains and you will be able to program everything (learn assembly only as needed).
Do you have any hello-world examples of how to load a 64-bit ELF kernel with a GRUB2 bootloader? I don't mean the Linux kernel; I literally mean a simple program that only prints "hello world". Something like this [1], but in long mode.
Using BIOS calls doesn't seem to be really "bare metal". One could use a BIOS interrupt to read data from a disk, or one could write a device driver that uses memory-mapped I/O on the target device to request the file. I'd only call the second case a "bare metal" approach.
Fortunately, most of the BIOS examples use bios_* filenames. The rest of the files are very nice.
QEMU runs the 'seabios' bios image in the guest, so you could certainly run a 'bare metal' custom image instead of the default BIOS blob if you wanted. At the basic level of "prod the UART, prod the timers" we should be a reasonable match to real hardware. Running on real hardware without the BIOS would be trickier as you start to need to do things like set up memory controllers, which you can get away without on QEMU.
One could follow this "toward the bare metal" path as far as they wish. E.g., in Silego GreenPAK chips, a CPU is considered an unneeded abstraction and you program the raw state machine directly.
This repo is mostly just educational, as a helper for understanding the Linux kernel on x86 / Linux kernel drivers.
On e.g. ARM, bare metal is potentially more useful because of embedded applications. But even on ARM you should just use the Linux kernel / some RTOS if you can get away with it :-)
The use case is to break through some layers of abstraction. If no one wrote such a guide, who would create the next OS? We would all be damned to copy existing code and pray it works.
I applaud the author for the effort; in particular, using images that can be booted should make it feasible to use these techniques for teaching!
A dream I've had for a long time is to make an application that you boot into, which is the only application on the computer, for maximum optimization. Upgrades would be easy: just reboot the computer and load the new software. I wouldn't write everything in assembly, though.
The problem comes when you need to use any kind of hardware: all the drivers would have to be part of your app. The "unikernel" concept, however, does some version of this: it loads a kernel that supports only the minimal functionality and drivers needed, with a single address space for a single app.
Historically this was more tractable because driver code was much simpler and there was a lot of cloning/emulation of "legacy" hardware interfaces (a lot of this is still present on PC, despite vendors' desire to get rid of it). Typically either the hardware capabilities were fundamentally narrower in scope than modern hardware or the hardware had its own controller (sometimes with more raw power than the host CPU...) that implemented an abstract interface. A bare-metal VGA driver for a single mode is no more than a few dozen lines of code, whereas a modern GPU driver stack literally has an optimizing compiler built into it.
This is standard "embedded" system programming for lower-end MCUs. "Unikernels" use the same concept to create standalone bootable binaries (linked with the required OS modules) to run directly either on hardware or on a VM hypervisor.
This seems like the most honest self-assessment I have seen of one's skills.