
And maybe the only person to ever successfully troll Linus Torvalds and then get an apology from him (ok just an excuse to link the epic debate in [1]).

Thanks Mr. Tanenbaum, your various works have been a huge inspiration as well as incredibly interesting to read or tinker with.

[1]: https://groups.google.com/forum/?fromgroups=#!topic/comp.os....




> Linus "my first, and hopefully last flamefest" Torvalds

Of everything hypothesized-for-the-future in the thread, this turned out to be the least accurate prediction of them all :)


"Don`t get me wrong, I am not unhappy with LINUX. It will get all the people who want to turn MINIX in BSD UNIX off my back. But in all honesty, I would suggest that people who want a MODERN "free" OS look around for a microkernel-based, portable OS, like maybe GNU or something like that."

Hehehe.


It's easy to be smug now knowing how things turned out, but lots of really brilliant people agreed with Professor Tanenbaum at the time.

The difficulties that microkernel projects encountered were not easy to forecast and took virtually everyone by surprise.

It was in the spirit of progress towards better ways of architecting software that Tanenbaum and Stallman (as well as many others) chose to try a new architecture rather than just build yet another monolithic OS kernel. Being on the pointy end of technology means you often end up being the one to discover what doesn't work.


What's even more amusing about that remark is that - as of MINIX3, at least - MINIX has in fact adopted the NetBSD userland.


He is not talking about the userland; he's talking about the kernel structure, micro vs. monolithic.


Ya...part of the design goal is that a microkernel as envisioned by AST should be able to have interchangeable userlands, including multiple different userlands running at the same time. So in that sense, AST was spot on.


Oh, and:

- device drivers as processes (so you can actually debug them)

- increased security by isolating various parts of the core OS from each other

- easy scaling from single machine to cluster by message passing

- treating devices as networked resources

- file systems in userspace (which we have now with FUSE)

and so on.

The benefits of microkernels go a lot further than just being able to use multiple userlands.


My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong. An OS itself should be easily portable to new hardware platforms. When OS/360 was written in assembler for the IBM 360 25 years ago, they probably could be excused. When MS-DOS was written specifically for the 8088 ten years ago, this was less than brilliant, as IBM and Microsoft now only too painfully realize. Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do real well on the final exam, you can still pass the course.

Haha :)


This just reinforces the fact that you shouldn't just blindly trust what your professors tell you ;-).


Au contraire, Tanenbaum was 100% right. Porting Linux to the first new architectures turned up a ton of hard-to-fix dependencies on x86; fortunately, the UNIX system call interfaces had all been copied from more mature implementations and lived on unaffected.


Just because you can succeed despite ignoring your professor's advice doesn't mean the advice wasn't sound.


Linux started x86 only, but it was soon ported to other architectures. I was running it on SPARC just a couple years later.


NeXT recognized those problems and pains, hence the portability of the XNU kernel.


Fascinating read! Although I couldn't help but imagine that discussion happening today... in 140-character tweets.


Actually, Linus Torvalds is on GitHub and Google+. And he still actively gets into 'flame'-fests in social media: https://github.com/torvalds/linux/pull/17#issuecomment-56613...


I know this will shock many node.js developers, but the Linux kernel is still managed with a mailing list.


> Thanks Mr. Tanenbaum

It should probably be Prof. Tanenbaum, or at least Dr. Tanenbaum.


Heh, I can explain that. :) I wrote Prof. first, but then thought it sounded like I had actually attended his classes, so I removed it. Then, in Spanish there is no separate title for a PhD vs. an MD, so we normally only use "doctor" as a title for physicians, or in extremely formal circumstances for other academics. I have always known him as "Andrew Tanenbaum" since the late 80s, so I thought Mr. would be a proper form to show respect.

Thanks for your comment!


He will probably appreciate it either way. But the point is well taken.


"I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)"

Since every microkernel in use (Windows NT and XNU, really) has taken much of its code monolithic, I think this is the true legacy of Tanenbaum's career.

Flamebait at its worst, but acceptable because he was a respected member of the academic elite.


His legacy, from MINIX and his books alone, more than outweighs whatever shortsightedness he might have had back then.

Also, it was Usenet. Flame wars were its bread and butter.


If you think that this is the bulk of AST's legacy, or that real microkernels aren't in general use, you really should get out and learn a bit more about computer science.


This could have so easily been phrased much more nicely and more substantively.

What is Tanenbaum's legacy? What microkernels are in use?


QNX?

And in all honesty, Linux has adopted a number of elements over the years that would not have flown with the original fully monolithic kernels.

No, we don't have userspace drivers. (Apart from proprietary GPU drivers, a number of enterprise hardware systems, at least the Canon printer drivers, and so on...) They're not isolated in the sense that microkernels would enforce, but they provide a shim for the kernel and then do a lot of their processing in userland.

We can load and unload drivers at runtime. That's what insmod/rmmod do.

We don't do message passing between kernel components, but in order to prevent messaging from becoming a bottleneck we have signaling mechanisms, we have netlink, and we have $deity knows what else.

We DO have userspace filesystems: FUSE. I'm still waiting for userspace block device drivers but that's probably not going to happen. :)

What do we have with Linux then? Not a microkernel - just a very modular and runtime-modifiable mostly monolithic kernel instead. A hybrid? Ish?
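The FUSE point is easy to make concrete. Below is a minimal read-only filesystem in the shape the fusepy convention expects; this is a sketch, the class name and file contents are invented, and actually mounting it would require the third-party fusepy package plus FUSE support in the kernel:

```python
# Minimal in-memory filesystem in the shape FUSE's userspace API expects
# (method names follow the fusepy convention). Entirely illustrative.
import errno
import stat


class HelloFS:
    """One read-only file, /hello, served entirely from userspace."""
    DATA = b"hello from userspace\n"

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello":
            return {"st_mode": stat.S_IFREG | 0o444,
                    "st_nlink": 1, "st_size": len(self.DATA)}
        # fusepy would want its FuseOSError subclass here; plain OSError
        # keeps this sketch self-contained.
        raise OSError(errno.ENOENT, "no such file")

    def readdir(self, path, fh=None):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh=None):
        return self.DATA[offset:offset + size]


# To mount for real (assumption: fusepy installed, /mnt/hello exists):
#   from fuse import FUSE
#   FUSE(HelloFS(), "/mnt/hello", foreground=True)
```

The kernel's FUSE module forwards each VFS call to these methods over /dev/fuse, which is exactly the userland-shim pattern the comment describes for drivers.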


I think you have to straight up call Linux a monolithic kernel. It's got some nifty modularization features, but none of them (other than maybe FUSE) are really what the microkernel folks are going for. And that's fine. Both approaches have obviously evolved to fill niches the other doesn't satisfy as well. What's clear is that neither approach is a panacea for all compute needs.


He's written several excellent books on operating system design, considered by many to be definitive resources; he's taught a lot of people how an OS works beyond the decision of microkernel vs. monolithic. He's a respected professor, seen as the authority on the topic, his less popular opinion on microkernels notwithstanding. Most OSes today may have gone monolithic, it is true, but as other commenters have pointed out, Linux is actually very dynamic and modular, and MINIX 3 has some very advanced functionality; all of this validates the vision he had for microkernels, even if it didn't pan out to be commercially optimal.


Xen, Minix 3, QNX, several L4 implementations, etc. OKL4 alone has been shipped on over 1.5 billion devices. QNX was huge. Xen is running everything at AWS, the largest web hosting company in existence (never mind everywhere else it is used).


QNX is huge. If you took QNX out of the world, the world would stop functioning just about immediately. So much stuff runs on it that you typically wouldn't even guess. In that sense it is just like other soft and hard real-time OSes: one of the unsung success stories of IT, simply because it works so well it tends to disappear.

Most things that use QNX or similar OSes inside simply work, rather than requiring constant upgrades and bug fixes. Reliability by design is so much better than reliability by trial and error, and the OS is a huge factor in that. It's a world where bugs are felt as 'egg on your face' rather than a 'wontfix'.


No...based on your reaction I think I phrased it pretty much spot on.

I added a new question to my interview process a number of years ago. Anyone who puts "computer science" on their resume gets asked "What do you think of Andrew Tanenbaum?" It's a free-form question intended to see what about computer science interested them enough to remember; I have similar questions about Turing & Knuth. Our team requires a lot of broad knowledge and original thought.

If they don't know who AST is, the interview is probably over. We continue if they can discuss pretty much any of the 6-7 seminal, award-winning CS textbooks he wrote, the other major projects he led like the Amsterdam Compiler Kit or the Amoeba distributed operating system, or the other contributions he made to networking, distributed systems, languages, computer architecture, operating systems, or information security (he published nearly 200 journal papers over 43 years as a professor). If they know about Electoral-vote.com, bonus points.

If all they can come up with is "Minix" and "that pissing contest with Linus", then I might see if the Linux devops guys have an opening. If they're that incurious, they'll do fine there; those guys think the world begins and ends with Linux, too.

Continuing in the "let me Google that for you" vein, both QNX and various L4-family microkernels are in use in a variety of embedded systems; QNX is also in the new BlackBerry products. There are a number of very mature security-oriented research microkernels (like seL4 and K42) that could very well show up in commercial products eventually. But that's back to needing to know more about computer science than Windows and MacOS.


> If all they can come up with is "Minix" and "that pissing contest with Linus", then I might see if the Linux devops guys have an opening. If they're that incurious, they'll do fine there; those guys think the world begins and ends with Linux, too.

Do you actually believe that someone is incurious simply because they don't share your own interest in Tanenbaum? Perhaps they've focused their curiosity on one of the many other luminaries in CS, or perhaps they're more interested in the topics themselves than the personalities behind them.

Your contempt for your own devops team is also disquieting. Based on your comment your company sounds like a toxic place to work.


That's why we ask the question about 3 people in broadly different areas. Frankly, if you don't recognize at least one of those names and understand the foundational contributions they made, then yeah I'd call that a kind of incurious.

As to devops, you may think whatever you want. I give them shit about the "all the world's Linux" attitude, they give me shit about "fucking research projects" (e.g. anything that isn't Linux). We understand our respective views, and it works.


> That's why we ask the question about 3 people in broadly different areas. Frankly, if you don't recognize at least one of those names and understand the foundational contributions they made, then yeah I'd call that a kind of incurious.

Well that's a lot more reasonable! Your original comment left no ambiguity that candidates insufficiently knowledgeable about Tanenbaum would be shunted over to devops.

> As to devops, you may think whatever you want. I give them shit about the "all the world's Linux" attitude, they give me shit about "fucking research projects" (e.g. anything that isn't Linux). We understand our respective views, and it works.

That could be the basis of some good-natured ribbing, which would be OK. What's not OK for a healthy company culture is the suggestion that devops people are inherently incurious, and the strong whiff of intellectual elitism which came across in your original comment.


I'm totally with you that questioning Tanenbaum's legacy is pretty poor form, but your interview questions sound designed to filter out anyone who doesn't share your exact interests, which is a real shame. A better follow-up than ending the interview when a candidate doesn't know who he is would be to describe his achievements (as you did here) and then ask the candidate to tell you what they know about someone else interesting whom you may or may not already know all about.


This has nothing to do with cultural bias. It's just basic CS stuff that anyone with "CS" in their resumes should know about. Heck, even undergraduate students will probably have "AST" tattooed inside their brains in the first semester alone.

I'm sorry to be picking on you, but this is one of the things that is absolutely wrong in our field: we don't learn anything from history. We don't know what was being researched in the '70s and proceed to reinvent the wheel over and over, thinking we somehow have magical brains that are unearthing some concepts for the first time in human history.

The traditional CS curriculum should adopt a mentality of "ok, you now understand at which point in history we are in CS? Know most of the past inventions? Fine, now proceed to build on top of them and stop wasting everybody's time with your rediscoveries".


I don't think you're picking on me, and I wasn't trying to suggest that such a question is cultural bias. What I meant is just that people in general tend to think the things they know about are the most interesting things, and that people who don't know about those things are deficient. But by definition they can't know about things they don't know about, which may be just as interesting. So my proposed replacement question just acknowledges and tries to work around that phenomenon. I doubt it is actually critical for people to be super familiar with Tanenbaum's work specifically; it is just an indirect way of assessing intellectual curiosity and CS chops, which I think my question would also achieve.

I pretty much agree with everything else you said, and I wish I knew more about the history of computing myself, since I've lost a lot of my memory of my college course on it to the sands of time. I wonder if there's a good survey book. Maybe AST wrote one...


Another good interview technique is "did you actually read what I said". Things like when I said it's an open ended question, asked about 3 of the seminal minds in CS at least one of which a CS graduate would have bumped up against, and that the candidate can focus on whatever they want to. Just a tip for your next interview.


> If they don't know who AST is, the interview is probably over. We continue if they can discuss pretty much any of the 6-7 seminal, award-winning CS textbooks he wrote, the other major projects he led like the Amsterdam Compiler Kit or the Amoeba distributed operating system, or the other contributions he made to networking, distributed systems, languages, computer architecture, operating systems, or information security (he published nearly 200 journal papers over 43 years as a professor).


> Continuing in the "let me Google that for you" vein, both QNX and various L4-family microkernels are in use in a variety of embedded systems; QNX is also in the new BlackBerry products. There are a number of very mature security-oriented research microkernels (like seL4 and K42) that could very well show up in commercial products eventually. But that's back to needing to know more about computer science than Windows and MacOS.

Let's be fair here. Your claim was that microkernels are "in general use". Ongoing research, however mature, does not support this claim. And Blackberry is hardly the heavy hitter they used to be. Meanwhile, the major OSs for computers as computers -- and as phones -- have backed away from the microkernel design. Maybe they shouldn't have; regardless, they did.

That leaves embedded systems. And there you have a point. So: microkernels are in common use in embedded systems. But let's not overstate their successes.


Let's be fair here: embedded systems are computers like any other, and there are far more of them than there are regular computers. If you walk into any slightly larger production plant and take QNX and the other soft and hard real-time controllers out, that plant becomes so much scrap metal.

Besides, QNX works very well on PC hardware and is used extensively in the communications industry.

Please do not take your own limited exposure to the world of IT as proof that certain things are true, especially when they are emphatically not. I know of several thousand QNX installs within 10 km from where I'm sitting.

Denying the success of microkernels such as QNX by disqualifying applications is like claiming Linux is a failure by excluding mobile devices.


Let's be fair here. If you get to redefine "computer" to exclude embedded systems, even though they vastly outnumber "computers as computers" (whatever that really means), then you get to be right. But really all that does is show your limited view of the field. For example, QNX runs in tens of millions of cars alone, and who knows how many Cisco routers running IOS-XR. If that is "overstating" success, I'm not really sure what success looks like.


> Let's be fair here. If you get to redefine "computer" to exclude embedded systems, even though they vastly outnumber "computers as computers" (whatever that really means), then you get to be right.

Actually, I don't think I disagreed with you, except to note that a research kernel that might be used someday does not count as "general use".

You might count the insinuation of overstatement as a disagreement. The point of that is that context matters. When I talk about choice of OS, the Mac in my living room has rather more weight in my mind than the embedded controller in my garage; I know I'm not alone in this. So if we say only that microkernels are heavily used, then we are correct, but we will be misunderstood. It is better, I think, to make a statement that is both correct and understandable, than to make one that is merely correct, while looking down on those who misunderstand.

EDIT: A quote[1] from me, giving an example from a rather different topic:

> If I open up a restaurant that serves General Tso's chicken and chop suey and sweet and sour pork and fortune cookies, and I advertise that I serve "American food", then my description is accurate, but my customers will be confused.

[1] http://www.reddit.com/r/todayilearned/comments/25dji0/til_ge...


> When I talk about choice of OS, the Mac in my living room has rather more weight in my mind than the embedded controller in my garage

This is pretty much exactly what I'm selecting against. It's not that your concept of "general use" in computer science excludes embedded systems (frightening, considering you're apparently teaching this stuff). It's that when more than one poster tells you that you're wrong, and provides concrete examples of why, your response isn't "that's something I need to consider" or "perhaps my knowledge of the field isn't what I thought it was" or, best, "I've got more to learn". Nope, you decide the "context" of the discussion is whatever you want it to be and trot out a contrived bit of sophistry which boils down to "I might be wrong, and I'm not saying I am, but because lots of other people would be wrong, I get to be right". Or something. Doesn't matter. We weren't opening a restaurant.

Double bonus points for focusing on a throwaway, tangential comment and pretending it's a central flaw of argument. This clearly isn't your first specious Internet argument.


Well, then, I suppose your comments are something I need to consider.


You sound like a terrible interviewer.


What do you think of Heinz von Foerster?


Or Niclas Luhmann, Maturana and Varela?


Yes...I think we can all agree that I've committed an unpardonable sin by only including 3 luminaries. There's only so much time in an interview.


Please tell me where you work so I don't apply. It's just that every time I think about Tanenbaum I'll remember your posts. For me, that's not good.


I don't think either of us need worry about that.



