Goto and the folly of dogma (2018) (manybutfinite.com)
69 points by luu on May 23, 2019 | 63 comments



Over my first five years of professional programming, I've been thirstily chasing the dragon of "perfect description". Early on I thought it was OOP. Then entity/component. Then FP. Then it was really about the type system.

Possibly the biggest lesson I've learned - both from the kiln of real-world project requirements (within a multi-paradigm language) and from my intentional ventures into different and diverse programming languages and frameworks - is that there's no such thing. There's no perfect way of describing, not within a domain and certainly not across them. It's not just a matter of abstracting farther and farther from real-world concerns, sacrificing optimization until you're in descriptive nirvana. There are many good ways to describe a given thing in code, and there are many more bad ways, but there's no perfect way. Once I grasped that I became a much better (and less stressed) programmer.


Yes, definitely. The essence is finding the right abstraction. The computer doesn't care if you get this wrong, and the code could work perfectly, but it can be a pain to maintain something if it is abstracted the wrong way. And aiming to reduce the size of your source files via "Don't Repeat Yourself" isn't necessarily the best way to make code maintainable. I've breathed a sigh of relief on seeing a code base that was your usual scaffolded MVC app rather than something with a tonne of metaprogramming. I've seen both, and the Keep It Simple principle has some merit.

In fact, the best abstraction may depend on the team who will be maintaining the code - so whether to use Tech A or B, or Pattern X or Y, might have, as an important factor, whether you are moving office from one city to another, and whether the job market is good or bad, affecting the flow of people into or out of the company, etc.


I feel like engraving this paragraph on a wall:

Taboos tend to accrete over time. For example, overzealous object-oriented design has produced a lot of lasagna code (too many layers) and a tendency towards overly complex designs. Chasing semantic markup purity, we sometimes resorted to hideous and even unreliable CSS hacks when much simpler solutions were available in HTML. Now, with microservices, people sometimes break up a trivial app into a hard-to-follow spiderweb of components. Again, these are cases of people taking a valuable guideline for an end in itself. Always keep a hard-nosed pragmatic aim at the real goals: simplicity, clarity, generality.

From Java's "AbstractFactoryBuilderDelegator" insanity to "nanoservices", the common thread to me seems to be overzealous decoupling, to the point where I need to look in 10 different locations just to find out what happens during a single request.


> the common thread to me seems to be overzealous decoupling, to the point where I need to look in 10 different locations just to find out what happens during a single request

If it were decoupling in any meaningful sense, you wouldn't need to look in 10 different locations. But you need to look, because it's all related and tightly coupled!


It’s become dogmatic in Java to automatically create getters and setters for every single private variable in the class - including mutable objects like Lists and Maps (so a reference from a getter can actually change the referenced object). I’ve pointed out more than once that these getters and setters - usually auto-generated by an IDE or an XML compiler - serve absolutely no purpose and you might as well just cut out the middle man and mark the variables public at that point. Nothing makes a Java “architect” recoil in horror like the suggestion of just admitting that you’re not actually doing object-oriented programming and making the variables in your de facto “struct” public, so I’ve given up arguing with them; I shrug my shoulders and create reams of pointless “getters” and “setters” now.


As you get more experienced, most of these things make you laugh or cry (depending on the situation); it does not matter what companies like FB and Google do, people on HN or Reddit will take it and do it to the extreme: we now ‘need to’ use React for everything; if it does not fit, just beat it with a hammer until it does. Kubernetes and microservices must be used for every tiny little part of the app even if it causes a lot more overhead in performance/memory use (computers are cheap and fast!) or debugging. Abstract almost everything! (Java + OOP, JavaScript and the npm mess) to Abstract almost nothing! (Go w/o generics), Make everything reusable! (left-pad), Rewrite everything in JS! Rust! Go! etc etc.

Everyone is running after each other and doing it more extremely, and the end result is just as shit as if you had not done any of that and had just thought about it a bit before opening some IDE and code-generating your million lines of boilerplate with the unstable and slow framework du jour. As an older coder I sigh when a codebase is taken out of the ‘mothballs’ even 6-12 months after creation and people cannot get it running, because everything it used is outdated, because the framework and library authors move fast and break everything all the time. And of course it is in an outdated language/framework (Ruby on Rails is soooo passé) so no one knows anything, and it uses the 358 most popular DSLs of the time (350 unmaintained since January), so unless you drank the same Kool-Aid it is a nightmare spelunking adventure.

At least Dijkstra had sound mathematical reasoning for his arguments and wrote about them eloquently (and with good humor, I may add); most of what is peddled in the hipster coding circles is smooth talk by a gifted social media frontman, with no solid basis in anything besides the person being popular. I do not even understand how people dare to put their name on complete messes like npm, or on one-line npm packages, unless it is a joke. I assume things like left-pad are in fact a joke; if they are not I would have to cry myself to sleep every night. So I just lie and say it is funny.

Only when someone codes something without any of that and it gets popular or makes a lot of money do people come out with ‘it was the best fit for this occasion’. The best example I can think of is anything Arthur Whitney (k/kdb+) does; his software makes a ton of money, it is faster, smaller and, in my opinion, easier to debug, and it uses fewer resources than most things I have ever seen passing here (including what people call embedded; no people, something with a gig of memory is not embedded), and yet it pukes over almost all the rules and style guides that everyone loves so much. Not to mention: he does something a lot of programmers are jealous of (including me); he makes money with a programming language, and he is always used here as a counterexample when people shout that programming languages that are not open source and/or are commercial (even very costly) do not work.

I wanted to write one sentence; it became slightly more, but I guess most of it is on topic.


I'm probably going to take a lot of heat from all the young whippersnappers out there for this, but I absolutely love your comment about React. I'm going to save it. It totally describes my experiences with other developers. They want to use React to rewrite major portions of our codebase that work perfectly well as is, just because React is super awesome! Can you guess how many of our customers have complained that our website isn't a single page application? I'll give you a hint: it's less than one. The devs will also take little teeny projects that would take less than an hour to write in Vanilla JS and turn them into big 20-hour development projects with a monolithic codebase that all of a sudden needs routers and back-button integration and URL mangling and gigantic switch statements to draw the correct "page." Oh, and don't forget you have to set up all that webpack and compilation tooling so that you can compile all that garbage into other garbage. And then you also have to do that build over and over again for every change. This is JavaScript. Script is in the name. It's not meant to be a compiled language. And contrary to our devs' beliefs, React does not run or draw faster than Vanilla JS, unless you are constantly redrawing the whole page in Vanilla JS, which no one does. I hate React.


Well, I have 100s of anecdotes like that (literally, because I work a lot with incubated startups); when something is not React but makes bucketloads of money anyway (pro tip: those two things are absolutely not related at all), there will be someone who suggests a rewrite in React (Native) for a reason that makes no sense to anyone but the people who are starry-eyed looking at their heroes on YouTube giving low-tech presentations, but with such an air of superiority. Well, they DO work at Facebook (wait, was that not some immoral company? Wait, wasn't the first version written in PHP, which we all (...) hate here, by the robot CEO everybody thinks is slightly insane these days? I guess that doesn't affect the tech now, so let's ignore it)!

Edit: I did not mean the last part to be sarcastic, although it reads like that; I think Zuckerberg is a vastly overrated twat, but that, and the fact that the company's product currently sucks (yes, yes, IMHO, but many people agree, and I mean currently; it could be great, but shareholder value), do not have anything to do with technical merit.


In the United States the management layer doesn't have a clue so if you don't keep up on React, GraphQL, etc etc - you're seen as a curmudgeon.

They're not the ones learning it, but they're still attending all of the conferences for it, and with no practiced engineering capability they're back to cargo-cult BS.

Best to keep learning the new hotness or it's career suicide. Just remember, for almost any 9-5 it's about the _narrative_ of work more than it is about the work. Rewriting/changing huge portions of your already-working tech stack is job security. I truly believe a huge portion of engineers engage in their own "make-work" to justify their existence/paycheck.


Keeping up with something does not imply shoehorning it in everywhere, but I agree with you.


I was surprised by the number of gotos in the Python runtime. The link in the article was down so here:

https://github.com/python/cpython/search?q=goto

There's a lot of "goto exit" which is obviously a CPython runtime convention - fair enough. However there's plenty of classically bad code, example:

https://gist.github.com/ridiculousfish/ffe4fa2a17c831ed06e57...

These are old-school-bad gotos: `if` statements would do the job more clearly. Is this a broken-window phenomenon: one planted `goto` opens the door for the rest? Or is there a deeper motivation for this style?


To be clear, these are not the "classically bad" go-tos that Dijkstra et al. railed against. C's goto is restricted to local jumps. You'll rarely see a go-to used in the classically bad style these days outside of something like a hand-written assembly interpreter main loop.

As for why you'd want to write a goto where an if would be semantically equivalent, there's a mix of style and human-level semantics: gotos are for "exceptional" cases, so the "normal" case looks flat and falls through directly to unconditional code, keeping the main logic flow at a consistent indentation level. (And apparently this heuristic is also baked into simple branch predictors, though I doubt that's something that comes up nearly as much.)
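
Something like this, as a sketch of the pattern (the helper names are hypothetical, not actual CPython code):

    #include <stdlib.h>

    /* Hypothetical helpers, declared only so the sketch is self-contained. */
    int read_header(char *buf);
    int read_body(char *buf);
    void process(char *buf);

    enum { BUF_SIZE = 4096 };

    int handle_request(void) {
        int err = 0;
        char *buf = malloc(BUF_SIZE);
        if (buf == NULL)
            return -1;

        if (read_header(buf) < 0) { err = -1; goto exit; }
        if (read_body(buf) < 0)   { err = -1; goto exit; }

        process(buf);           /* the "normal" path stays flat            */
    exit:
        free(buf);              /* cleanup runs on every exit path         */
        return err;
    }

The exceptional cases bail out sideways, and the main logic never gains an indentation level.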


This feels less like 'folly of dogma' and more like these (C/C#) programming languages don't have the constructs to safely and properly express what the programmer is trying to do. 'goto exit' is an unsafe and dangerous version of Rust's '?' operator.

> We should be willing to break generic rules when the circumstances call for it. Keep it simple.

I argue we should instead iterate on the programming language design to make sure we don't need to make these kinds of trade-offs.


C++ has solid "cleanup" constructs so I wonder why CPython is in C instead of C++. Is it portability, compilation speed, complexity control, transition cost, something else...


All of those, but also Python came out in 1991, when C++ was still in its infancy. Even if C++ had been mature, though, C is still a better choice: Python is often embedded in other programs, and doing that with C has a lot fewer headaches (simpler linking, better compiler support).


CPython is a relatively simple program calling for a (relatively) simple language. It's meant to be understood by lots of people and used within many different contexts, and C is in a much better position here.

At its heart it is just a bytecode interpreter, i.e. a loop with a huge switch. I don't think you need an elephant language just for that.


It's the problem of worse programmers seeing something used, and then misusing it because they don't understand the fundamental reasoning that led to its use in the first place.

Graceful error exiting is to this day an unsolved problem in computer science (at least as far as the popular languages are concerned). Even after GOTO elimination hit its stride, Knuth noted:

"Another important class of go to statements is an error exit. Such checks on the validity of data are very important, especially in software, and it seems to be the one class of go to's that still is considered ugly but necessary by today's leading reformers. Sometimes it is necessary to exit from several levels of control, cutting across code that may even have been written by other programmers; and the most graceful way to do this is a direct approach with a go to or its equivalent. Then the intermediate levels of the program can be written under the assumption that nothing will go wrong."


> I was surprised by the number of gotos in the Python runtime.

And yet, there is no goto/label in the Python language itself.


Well, not first class, but you do have access to longjmp.

    from libc.setjmp cimport jmp_buf, longjmp, setjmp


Dogma is a real problem in this industry. When OO came up, suddenly everything had to be objects. So instead of writing

A=add(B,C)

You had to write

Adder AA; A=AA.Add(B,C)

I remember endless discussions about this; people always argued that functions are not OO, whereas I said OO is about state, so no OO is needed for adding two numbers.

Same with goto. In FORTRAN it was an essential tool but suddenly it became illegal and you had to write complex if statements and other things just to get the same effect.

I guess software is so complex that it's very hard to always understand all the drawbacks and advantages of something, so you have to live by a set of rules that usually work and follow them blindly.


> Adder AA; A=AA.Add(B,C)

Did anyone actually ever do that or is this just a huge red herring?

Also, the above looks more like a data flow language where adders are necessarily components in the wiring diagram (try building a CPU without adders!).

Add can be a virtual method on B (so B.add(C)), but then you really want Dylan-esque multiple dispatch on both B and C. But those kinds of debates fell out of style with the 90s.


“Did anyone actually ever do that or is this just a huge red herring?”

Yes people did that and still do.


Borland's C++ Windows toolkit back in the early nineties would do that sort of thing, if memory serves.

Really deeply convoluted OO.


And way better than MFC or ATL ever were.


Yes, it was a real issue. I once had the misfortune of having to work on such a project; it was by far the hardest legacy code base to understand of any I have seen (even compared to those written by self-taught PHP guys or those from academia).


Cargo culting OO helps no one, but neither does dismissing it out of hand.

Here are some other things that Adder AA; A=AA.Add(B,C) has over A=add(B,C) that you are glossing over

1) You move the adding logic to its own file, with other adding-only responsibilities

2) You allow injecting different implementations of Adder - maybe introducing a more efficient one in some cases but not others

3) You enable mocking Adder so that your logic that verifies that B and C are added can be tested without having to re-add B and C all the time.

Not sure how much of a concern this was at the time when OO paradigms were first being created, but in today's world where .Add might be a call to different cloud services, and where unit tests with good mocking are essential for any serious application, these concepts are essential.


> 1) You move the adding logic to it's own file, with other adding-only responsibilities

The code snippet above says nothing about where the adding logic is, other than that one places it in a method while another uses a straight function. Said method/function can live anywhere, unless you have a specific language in mind that prevents you from moving functions to a file?

> 2) You allow injecting different implementations of Adder - maybe introducing a more efficient one in some cases but not others

The thing the code wants to do is add something, so if that can be made more efficient, it can be made more efficient by implementing a more efficient add function. Why is it that you need to introduce something that is not an add function, in order to inject a more efficient add function? Why can you inject an Adder but not add()? What language is that again?

> 3) You enable mocking Adder so that your logic that verifies that B and C are added can be tested without having to re-add B and C all the time.

This makes no sense to me. If you add B and C, you test it once, so what's the deal with this re-adding? And why can't you mock add()?


I hope you are kidding. But yes let’s worry about injection for even the simplest things. Maybe we should start with language injection where you write code and later on you inject a different language. That would be the ultimate maintainable system.


> language injection where you write code and later on you inject a different language.

I (still) work with a Classic ASP code base.

I kid you not, that there are VBScript functions, which call JScript (not JavaScript) functions, which in turn call VBScript functions.

It is a terrifying and glorious mess.

edit:

I also worked with a system at one time that used node.js to create C# files on the file system, then used the C# compiler to create an executable, and then ran it. It was... not great.


The funny thing is that these things may still have been the best solution for the problem within the available time. I have created some things that in hindsight were horrific but at the time they were the best that could be done.


Bonus points for good sarcasm!


This rant kind of has it backwards, and Dijkstra's argument against GOTO has been the victim of its own success. The use of GOTO statements he was critiquing doesn't really exist in the wild anymore, so people see the tamed version of GOTO we use to break out of nested loops and so on, and wonder what the big deal was.

It's almost like an anti-vax argument. "This disease doesn't exist anymore, why are we cargo-culting by vaccinating against it?"

The argument in the original rant was about the limits of our ability to reason about code, and remains a deep and useful insight. The fact that we don't really have examples of non-structured codebases to point to in 2019 shows how essential the invention of it was to our work.


GOTO is an easy target due to its cultural notoriety (regardless of how it actually looked in the past), but the overarching argument is indeed against dogma. To quote Donald Knuth:

"In the late 1960's we witnessed a "software crisis", which many people thought was paradoxical because programming was supposed to be so easy. As a result of the crisis, people are now beginning to renounce every feature of programming that can be considered guilty by virtue of its association with difficulties. Not only go to statements are being questioned; we also hear complaints about floating-point calculations, global variables, semaphores, pointer variables, and even assignment statements. Soon we might be restricted to only a dozen or so programs that are sufficiently simple to be allowable; then we will be almost certain that these programs cannot lead us into any trouble, but of course we won't be able to solve many problems."

It's a problem as old as time itself: A smart person makes an observation based on deep understanding, and the rest, rather than go through the cognitive load of learning its fundamental roots, convert it to an easy statement of morality and dogma, shrouding it deeper and deeper with ceremony and pomp to create a mystique that none dare investigate.

Thinking is hard, and takes much energy. Most people prefer to keep that to a minimum, thus our superstitions, dogmas, cults, and priesthoods.


"GOTO is an easy target due to its cultural notoriety (regardless of how it actually looked in the past), but the overarching argument is indeed against dogma."

I agree.

But I think it's worth pointing out that if we're going to use reluctance to use goto as an example of dogma, it strengthens the anti-dogma argument even more to point out that the dogma isn't even correct on its own terms; the goto that the dogma is rejecting historically isn't the same goto that exists today.

Under many dogmas lies a kernel of truth. That kernel can be worth extracting, and is often quite enlightening, unlike the dogma.


> The use of GOTO statements he was critiquing doesn't really exist in the wild anymore

Very much this.

The programming world at that time was very much different than it is today. Fortran, which was thought of as a higher level language, had this abomination of an IF statement that (a) took an arithmetic expression and (b) had three possible destinations: one for a negative value, one for zero, one for a positive value. It was extremely easy to get yourself into a full conceptual overload for any interesting program. Targets of branches were statement numbers. No language in wide use today has this problem.

>the original rant

If you go back and read it, it isn't so much a rant as a very logical reasoned description of the difficulties we were all feeling at the time as programmers. It was the rest of us (me included) that were part of the screaming crowd saying "Down with the GOTO totally!" Some of this unfortunate resulting hype was caused by the title of the article, not chosen by Dijkstra.

Knuth's response, as usual, has good humor in it. He notes a Dr. Eiichi Goto of Japan complained that he was always being eliminated. The concept of GOTO-less languages was also put forth in the XPL compiler written by McKeeman, Horning and Wortman, which incidentally was my introduction to compilers. Knuth also mentions Bliss, a very fascinating language, whose designers ultimately recognized that they had gone too far.

The author of TFA touches on another overzealousness in today's design thinking, and that is object-oriented programming. In another article or talk, Dijkstra is quoted as classifying object-oriented programming, along with other endeavors, as part of a flourishing snake-oil business. A position which obviously enraged Alan Kay.


This comment, https://news.ycombinator.com/item?id=19962895, from a few days ago explains that any use of goto broke Dijkstra's theoretical formalization of program structure. Accordingly, even today's minimal use of goto would still be harmful as it would break his formalization model.

Fortunately we have better models that can handle goto. So it's not really like the anti-vax movement because the most substantial thing that changed isn't our use of goto (the disease burden in your analogy) but better formalizations (i.e. we have better medicine that makes fewer demands of the patient).


No, it explains that any use of the original goto operator would break the formalization model. No current language has the goto model that Dijkstra was advocating against; he won. To put it in modern terms, the goto in question would be exposing to the programmer the raw "jmp" assembly instruction. No high level language does that.

Indeed, the best way to understand what today's tamed gotos can and cannot do is precisely to understand that formalization model, and to see when you can't do a certain thing because it would break it.


Um... can't C jump pretty much anywhere (at least within the same function)? Including insane things like into loops? Is that enough to break Dijkstra's model? (Or are you excluding C from "high level languages"?)


Well, I'd certainly exclude C from modern high level language, but I am obviously forced to concede that's not what I said before. :)

Instead, I'll have to confess ignorance; I didn't realize that C's goto was that insane but it seems it is. https://stackoverflow.com/questions/6021942/c-c-goto-into-th...
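
For what it's worth, here's a contrived sketch of the kind of jump C accepts (my own example, not from that thread):

    #include <stdio.h>

    /* Contrived sketch: C lets a goto jump into the middle of a loop body,
       bypassing the for statement's own initialisation and first test.
       Entered normally this loop would print 10..12; entered via the goto
       it prints 0 through 12 instead. */
    int main(void) {
        int i = 0;
        goto inside;                 /* jumps straight into the loop body */
        for (i = 10; i < 13; i++) {
    inside:
            printf("i = %d\n", i);
        }
        return 0;
    }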

I'll have to ask the reader to suitably modify the post above. The tamed goto made available in most modern high level languages can't break the analysis, because the compiler will reject it, e.g. https://play.golang.org/p/2tA7Bpof611 (hit "run" to (try to) compile)


My compliments. That is perhaps the most gracious response I have ever seen to someone pointing out that your statement is (somewhat) in error.


It has been decades since I was tempted to "goto". This is not because of dogma or "drinking the kool-aid". It is because I use an expressive language that has constructs that mean what I mean, so they don't need to be cobbled together out of such fragmentary primitives.

That so much C code is littered with them just demonstrates a deep weakness in C, and not any kind of fundamental principle. I admit surprise that C# turns out similarly weak.


Would you also consider the assembly code that is generated by your high level language to be so "littered" with jmp instructions, arising from a "deep weakness"?

It's one thing to prefer to work with another abstraction, but this is awfully judgmental phrasing that denies or unfairly maligns a usefulness and necessary ubiquity at a different level.


Yes, assembly language is a language of deep weakness. There's a reason we don't use it unless we have to - it's too hard to write anything in assembler. In fact, assembler is weaker than C - in C, you can usually avoid goto if you want to badly enough, but in assembly, it's impossible.


> it's too hard to write anything in assembler.

And yet, everything you run is written in it. (By a compiler or a JIT, sure.) The goto is a useful abstraction for its layer. It doesn't have to be your favorite layer, but it's there, and ubiquitous.

I feel like discussions around memory safety are similar. I can't get a lot of people around here to admit that in order to be blessed with memory safety at one layer it needs to not exist somewhere else, and that's OK.


You seem to be having a different discussion than most of the rest of us. You're claiming that it's fine for its layer, and the rest of us are saying that we don't want to work at that layer.

Yes, jmp is useful at the assembly layer. Yes, everything eventually gets run on assembly (on the way to microcode, and then transistors, and then quantum mechanics). That doesn't mean most of us want to work there, though.

And goto is the same. Having seen that we don't have to work in that way, we don't want to work in that way. We can work with larger abstractions so that we don't have to deal with that kind of detail.


> You're claiming that it's fine for its layer, and the rest of us are saying that we don't want to work at that layer.

Correct. This is what I said all along. Glad to see you're up to speed.

Meanwhile, every time you write an if statement ... May you think, acknowledge, appreciate: "I'm adding a goto!" Or possibly several of them. [I am pretty sure I have had discussions with people who say they are also against if statements, but I don't think that's quite as common.]


Everyone is always perfectly and completely aware of the jmp instructions that implement their if and while statements. Talking about them does not make you cleverer than anyone else.

What you are missing, and is the fundamental essence of the whole discussion, is that these jmp instructions don't just jump to any old place, like a goto. They jump to only very specific places corresponding to the boundaries of our if and while statements. The compiler will never generate an undisciplined branch, absent an actual goto in the source.
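
To make that concrete, here's a rough sketch with the branch targets a compiler might emit written out as comments (the labels are hypothetical placeholders, not real output):

    /* The jumps generated for structured code only target the boundaries
       of the construct itself, never an arbitrary line. */
    int sum_below(int n) {
        int total = 0;
        int i = 0;
                            /* loop_top:                              */
        while (i < n) {     /*   if !(i < n), jump to loop_end        */
            total += i;
            i++;
        }                   /*   jump back to loop_top                */
                            /* loop_end:                              */
        return total;
    }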

Beneath the jmp instructions there are register transfer machines, and beneath them are logic gates, and beneath them are transistors and wires, and beneath them are charge carriers and fields, and beneath those are atoms and crystalline structure.

At each level you can find the correspondence with structures in the next level above and below. In no case does the lower level violate the structural rules of the next level up, despite that in principle, it could. That is how we get systems that can be understood, and work.


> Talking about them does not make you cleverer than anyone else.

Please don't assume that any notion of my own cleverness is the crux of what I am saying or has anything to do with it.

> At each level you can find the correspondence with structures in the next level above and below. In no case does the lower level violate the structural rules of the next level up, despite that in principle, it could

Disagree, especially since you went so far as to talk about the physics. There is a lot of order created from chaos, and the structural rules are largely fiction, taking some effort to impose them.


But they are, in fact, imposed, or you would not be able to read this; thus, fictional only in that they were invented.

But in any case, and to the point, there is nothing fundamental about jmp instructions. They, and the sequential execution they interrupt, are a way to help organize state machines. It is a triumph of decades of effort that we have succeeded in making state machines of such complexity behave in comprehensible ways, and a deep failure that we have not found any better way.


Machine instructions are the archetype of fragmentary primitives. We use better languages for good reasons.


Swift, for example, has a nice construct called `defer` that runs some code immediately after the current scope exits -- no matter the means of exit. This is pretty much all I would want to use `goto` for.

I sort of disagree that C is "weak" -- I think its simplicity, which even includes `goto`, is a strength. But I agree that other languages can certainly, within their own contexts, come up with nice things.


C# is one of the very few modern languages that has kept goto. A similar comparison in most other modern languages does not make sense because you cannot even use it.


It's funny how Go's limitations made me go back to using the goto statement to deal with errors in an HTTP handler.


There is some nuance here that the author misses. Goto jumps to a location in program text. Other techniques, like (single shot) continuations, jump to program state. The former is dangerous. Not just because you can write spaghetti code, which was the original critique against goto, but because you can make jumps that have no meaning. For example, you can jump to a location that has not been initialised yet. With continuations you can still write complicated control flow, but you can only make jumps that are meaningful.

So I argue the issue is not with goto per se, it is with the lack of better tools provided by the languages in question to express complicated control flow. Like many things in programming languages, better tools are well studied but not available in most mainstream languages, which are stuck in ~1980s paradigm.
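
For what it's worth, C's setjmp/longjmp (mentioned elsewhere in this thread) is arguably the closest thing the C world has to a single-shot escape continuation: you can only jump to a state that was actually captured with setjmp, not to an arbitrary location in the text. A minimal sketch, with made-up names:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf on_error;        /* the captured state we may jump back to */

    static void parse(int bad) {
        if (bad)
            longjmp(on_error, 1);   /* jump to the captured state, not to a label */
        puts("parsed fine");
    }

    int main(void) {
        if (setjmp(on_error) == 0)  /* capture the current state */
            parse(1);
        else
            puts("recovered at the point that was captured");
        return 0;
    }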


> When Linus Torvalds started the Linux kernel in 1991, the dogma was that "monolithic" kernels were obsolete and that microkernels, a message-passing alternative analogous to microservices, were the only way to build a new OS. GNU had been working on microkernel designs since 1986. Torvalds, a pragmatist if there was ever one, tossed out this orthodoxy to build Linux using the much simpler monolithic design. Seems to have worked out.

Except that the desktop is the only place left standing where microkernels haven't fully caught up, and even there macOS and Windows are a kind of compromise between monolithic and microkernels, with plenty of stuff running in userspace, increasing with each release.

Even Project Treble pushes several drivers into userspace processes, with Android IPC to talk with the kernel layer.

Had Hurd gotten the same love from IBM, Compaq, Oracle, Intel, ... as Linux did, it might have turned out quite differently.


Having saved myself a headache earlier by parsing some HTML with regex, I'm appreciating this post. On the other hand, if you don't obey dogma, it may impair the delivery of your cargo. Everything is a tradeoff.


There's a difference between admitting that corners need to be cut sometimes and arguing that cut corners are _correct_.

You didn't "parse HTML" with a regex; you created a solution to fix a very narrowly circumscribed problem by pattern matching on some string inputs. Big difference. Were an easy to use HTML parser (or likely lexer) readily available there'd be little excuse to cut corners as the proper solution would likely be far easier to prove correct (formally or informally) than the regex hack. (Full disclosure: I've written an HTML5-compliant streaming HTML lexer precisely so I--and others--would have less reason to depend on regex hacks in security scanners.)

The article says that the Linux approach proved good enough. No, it didn't. Linux has turned into a nightmare of security vulnerabilities, on par with Windows 95, just as originally prophesied. We only tell ourselves it's good enough because we're unwilling to admit where we're at. Remember when Linux and open source were paragons of security? Man, how times have changed....

But now we have a formally verified operating system in seL4, which is... [wait for it...] a microkernel. Of course, it's difficult to use as a general purpose OS, though not far from where Linux was in the 1990s. In time we'll get there. In the meantime no good comes from lying to ourselves about the nature of our solutions.


> Remember when Linux and open source were paragons of security? Man, how times have changed....

I remember a time when Linux was a paragon of security compared to the corresponding Windows version, Windows 95. I do not remember a time when Linux had no vulnerabilities. What happened is not that Linux got worse but that Windows got much better.


> Linux has turned into a nightmare of security vulnerabilities, on par with Windows 95, just as originally prophesied.

What exactly are you talking about ? What was 'originally prophesied' ?


That monolithic kernels are more susceptible to attack because they're less resilient to programming errors. This was one of the arguments in the famous Linux v MINIX debate(s), but the notion that microkernels were more secure goes back to before the term microkernel was even coined (i.e. before 1980s).


Say hi to Tony the Pony for me.


Isn't "keep it simple" also dogma?


It can be. Sometimes the requirements are complex, so any expression of them in code would also be complex. Then another person sees this complex code and automatically assumes that it is bad and should be made less complex, while completely ignoring the fact that doing so would break the requirements. And to the person who is now going to interject that the requirements should be simpler: I am all for that when possible, but in many cases it is not, e.g. when they are written into contracts. Of course, bad programmers will create complexity where none is needed.



