Hacker News | Anti-if: The Missing Patterns (joejag.com)
130 points by joejag on June 9, 2016 | 71 comments



I'm disappointed that of all the patterns presented, most of them actually increase the complexity to some extent, and none of them are the use of a lookup table/array. IMHO that is the "real anti-if pattern" here, and it can immensely simplify code. I've taken long, convoluted nested if/else chains with plenty of duplication and turned them into a single array lookup. A bonus is that performance and size often benefit as well, since the code is far less branchy.

http://www.codeproject.com/Articles/42732/Table-driven-Appro...


I do this a lot too. Often long chains of if statements are really performing a manual mapping from one set of values to another. Just keep a map and perform a lookup! Easier to read, easier to change, and easier to extend since the mapping is a data structure instead of code.
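A minimal Python sketch of the idea (the names COLOR_BY_NAME and color_for are hypothetical, purely for illustration):

```python
# Hypothetical example: a chain of if/elif returning colour codes
# collapses into a dictionary lookup.
COLOR_BY_NAME = {
    "blue": "#0000ff",
    "red": "#ff0000",
    "green": "#00ff00",
}

def color_for(name, default="#000000"):
    # One lookup replaces the whole chain; extending the mapping
    # means adding a row of data, not another branch.
    return COLOR_BY_NAME.get(name, default)
```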


Efficiency-wise I agree, but if I don't care about efficiency then I often prefer an if-chain (or better still, a switch-case).

The point is that the desired mapping is not just known at compile time, but explicitly known to the programmer. So it is clearer if it is right there in the executable code rather than hidden away in some moving part (the mapping data structure).


switch-case is often compiled to a table lookup, but if your expressions are constant (at least for the part where the table is being used), the latter is definitely less verbose.

So it is clearer if it right there in the executable code rather than hidden away in some moving part (the mapping data structure).

Following several levels of nested if/else is "clearer" than going to the right array index? I don't know what you mean by "hidden away in some moving part".


If you have several levels of nested if-else (as opposed to a linear chain of "if"-"else if"), then you probably need a more complicated table than a linear array.

The moving part that gets hidden away is the table (or, more often, hashmap/dictionary) that is stored somewhere in memory. If you are lucky, it is stored statically. It is never defined in quite the same place as the decision that it implements.


Personally I don't like switch statements at all, especially the ones that require manual control flow management. I can vividly remember multiple bugs over the years caused by forgetting to add "break" to a switch statement.

Unless you're optimizing for performance (which is rare in my domain), control-flow statements are a smell and should be avoided. Each branch has some "implicit" state that is being fragmented and has to be manually managed by the programmer.


C# will optimise something like >7 items in a switch statement to a lookup. Funny how this doesn't play nice with code contracts!


"turned them into a single array lookup."

Midway between nested conditionals and lookups is using flags and a tablesque series of conditionals. Redundant but easily understood (and therefore testable).

I first learned about this technique from Code Complete (linked in the TFA). IIRC, McConnell used the term "truth tables".
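A hedged sketch of the truth-table style in Python, using a hypothetical shipping-fee example (FEE_TABLE and shipping_fee are invented names): every combination of flags is enumerated as data, so each case is visible at a glance.

```python
# Hypothetical shipping-fee decision: every (member, overweight)
# combination is listed explicitly, like a truth table.
FEE_TABLE = {
    (True,  True):  5.0,   # member, overweight parcel
    (True,  False): 0.0,   # member, normal parcel
    (False, True):  12.0,  # non-member, overweight parcel
    (False, False): 7.0,   # non-member, normal parcel
}

def shipping_fee(is_member, is_overweight):
    # Redundant compared to nested ifs, but each case is directly
    # visible and directly testable.
    return FEE_TABLE[(is_member, is_overweight)]
```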

"...since the code is far less branchy."

GoF-style code-construction Design Patterns just introduce another level of indirection. That's it. (Insert cliché here.)

"Design Patterns" are overused, especially in the Java world (eg J2EE/Spring, Service Providers, Strategies, DI, IoC).

The "design" part is figuring out the least complicated balance between conditional branches and call stack depth, between composition and inheritance.

Rant over.


"Polymorphism" kind of counts, it replaces the switch-case with a vtable lookup.
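In Python terms, the "vtable" is the method lookup on the instance's class; a minimal sketch (Circle and Square are illustrative):

```python
# Illustrative shapes: the caller never branches on the kind of
# shape; dispatch happens through each object's class.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

def total_area(shapes):
    # No switch on type -- each object knows its own behaviour.
    return sum(shape.area() for shape in shapes)
```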


> Context: You have a method that takes a boolean which alters its behaviour

> Problem: Any time you see this you actually have two methods bundled into one. That boolean represents an opportunity to name a concept in your code.

This is only true if you're using literal booleans at your call sites. If the booleans are coming from somewhere else, like user input, you've just moved your single if statement in the method out to every call site.

> Context: You are calling some other code, but you aren’t sure if the happy path will succeed.

> Problem: These sorts of if statements multiply each time you deal with the same object or data structure. They have a hidden coupling where ‘null’ means something. Other objects may return other magic values that mean no result.

Passing a default value works, but in Java 8, returning an Optional is cleaner. And if you wanted a default value, you can still do

    repository.getRecord(123).orElse("Not found");
Edit: Also, the coping strategy link [1] he gave to remove exceptions sounds a lot like restarts in Common Lisp's condition system.

[1] https://silkandspinach.net/2014/11/06/on-paperboys-newsagent...


Common Lisp to the rescue!

   (defmethod make-sound ((b bear) (loud-p t))
     ;; bear makes loud sound
     ...)

   (defmethod make-sound ((b bear) (loud-p null))
     ;; bear makes quiet sound
     ...)
If we call (make-sound bear-instance nil), the second method is invoked because loud-p is specialized to the null class, whose only instance is nil. (There is a type nil also, whose domain is the empty set: it has no instance.)

If we call (make-sound bear-instance <any other value>), then the first specialization takes over. It's specialized on T, the grand superclass, so it is a fallback that takes anything. Any value other than nil serves as a Boolean true in Lisp.

It's a magic of Lisp that T stands for "any type" as well as "true". :)

I don't know whether treating Booleans as method parameters like this is good design, but I don't see any immediate downsides.


Also known as multiple dispatch, which is supported by many languages: https://en.wikipedia.org/wiki/Multiple_dispatch


Are the semantics to this comparable to Haskell's pattern matching on function arguments?


I think if your boolean is coming from user input then refactoring your method to accept user input, and then wrapping your method with a helper function that does the user input call would still be good advice.

At some point applying "anti-if" does involve major refactoring, so the outer style of your code might need to change a bit.

Of course like all design patterns it's more of a thought experiment than a total prescription IMO. But taking the thought experiment to its logical conclusion can also yield interesting results.

Some people might throw out strawman arguments about design patterns, but I'll end up reading the strawman and thinking, "yeah, that does sound like a good idea". It's pretty hard to overestimate the value of regularity in code.


> like user input, you just moved your single if statement in the method out to every call site.

I guess the author would argue that the boolean should be pushed up to UI code. Arguably that domain logic belongs in the UI. Possibly a utility method in the UI package.

That being said, with all patterns there are exceptions. Following something to absolute purity is going to result in unclear code - polymorphism purity can rapidly cause a spaghetti codebase. Even GoF patterns can easily result in messy code. All patterns are blueprints, not rules.

On the topic of boolean values, I'm becoming more and more convinced by the anti-bool pattern. Normally this involves using an enum (usually bit field) when you first declare a parameter.

    FileOptions options = FileOptions.None;
    options |= temporary.Checked ? FileOptions.Temporary : FileOptions.None;
    options |= encrypted.Checked ? FileOptions.Encrypted : FileOptions.None;
    FileUtils.createFile("name_temp.txt", "file contents", options);
createFile now includes an 'if' and is at odds with anti-if (which is a pattern that I agree with). This is exactly my point, irrespective of whether you use multiple methods (createFile, createTemporaryFile, createEncryptedTemporaryFile, etc.) or polymorphism (File, TemporaryFile, etc.); you're going to end up with unclear code. Increasing clarity is the whole purpose of patterns. Composition (which bitfields are a form of) is the clear winner here and so the anti-if pattern has to be eliminated.
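For comparison, Python's enum.Flag expresses the same bitfield composition; this is a hypothetical sketch (FileOptions and create_file are invented here), not the C#-style API from the comment above:

```python
from enum import Flag, auto

class FileOptions(Flag):
    NONE = 0
    TEMPORARY = auto()
    ENCRYPTED = auto()

def create_file(name, contents, options=FileOptions.NONE):
    # Hypothetical sketch: the implementation can test composed
    # flags without the caller writing any conditionals.
    is_temp = FileOptions.TEMPORARY in options
    return (name, contents, is_temp)

# Callers compose behaviour with | instead of passing bare booleans.
opts = FileOptions.TEMPORARY | FileOptions.ENCRYPTED
```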


This is why I never try to give prescriptions on code style.....whenever I think I have something good, someone else comes along with something better. And when I see someone else's advice, I can think of something better. And after I finish writing my own code, I can always look at it and think, "Oh, I could have done that better."

See also: efficiency. There's always a way to make it more efficient.


I'm afraid the article has left me unconvinced. I'm open to having my mind changed on the matter, though.

The point of passing boolean params instead of named functions is that most of the time there is shared code between the two paths and not literally all the code is enclosed in either the if or the else block. If the author was intending just to restrict to that one specific case where there was no overlap whatsoever, then I guess I would agree but I'm not sure how often that happens.

Polymorphism has some advantages but it's certainly got its own troubles in terms of understandability. The author links to an article which even states "I think that we hate conditionals more than we should. Introducing polymorphism too soon will complicate your code, making it hard to understand and test." It then goes on to give an example where it says polymorphism is actually the right choice, to avoid two switches. I'd say that at least this trivial example would be best handled by some data-driven approach. Look up salaries in a database instead.

The inline statement one can certainly be better in some cases, but the example given is quite poor. What does "foo && bar || baz" mean? Well, now I have to remember the order of operations ... 'or' after 'and' I guess. Surely this is easier to misread than the if statement example, which in my mind closely matches how I think. At least it should be "(foo && bar) || baz". But even then, that only works when the actual return type is boolean and there are no state changes made.

I don't really see how using an Optional type would remove the if(null) checks. It just makes the meaning explicit, which is good but not remedying the original problem.

Pattern 5 seems nice.


> The point of passing boolean params instead of named functions is that most of the time there is shared code between the two paths and not literally all the code is enclosed in either the if or the else block

If you have this

    method(bool arg)
    {
      // block A
      if (arg)
      {
        // block B1
      }
      else
      {
        // block B2
      }
      // block C
    }
you can replace it with

    method1()
    {
      A();
      B1();
      C();
    }
    
    method2()
    {
      A();
      B2();
      C();
    }
where A(), B1(), B2() and C() are new methods that contain the previous blocks. This makes the code clearer, IMO.


>This makes the code clearer, IMO.

Depends on how much state you need to pass between A, B1, B2 and C.


It also depends on the semantic value of A, B1, B2 and C. If you cannot express them as meaningful steps of your overall algorithm, then you are bound to end up with very confusing method names like calculateFooAndCreateBarAndAlsoInitializeBaz, or (even worse) prepareForFoo.

EDIT: By the way, an anecdote. Years ago, I was working with an offshore developer who was producing huge scripts without any modularization whatsoever. In particular, we told him that a certain 2000-line script needed to be modularized, so he should break out steps into functions etc. He produced a new version containing two functions of 1000 lines each. The first function would prepare everything, then tail-call the second function with 27 arguments, most of them large intermediate data structures.


Could have been worse: He could have made 1000 functions two lines each.


If the amount of state you need to pass is that significant, that is another code smell that needs its own resolution.


> I don't really see how using an Optional type would remove the if(null) checks. It just makes the meaning explicit, which is good but not remedying the original problem.

If you always use `Option` (or whatever equivalent your language/base library provides) and banish `null` from your source code you've effectively avoided all null pointer exceptions and can therefore remove all null checks.
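Python has no built-in Option type, but the same discipline can be approximated by returning a zero-or-one-element sequence instead of None; a hypothetical sketch (find_user is an invented name):

```python
# Hypothetical find_user: zero-or-one result as a list, so no None
# (and no null check) ever reaches the caller.
def find_user(user_id, users_by_id):
    return [users_by_id[user_id]] if user_id in users_by_id else []

users = {1: "alice"}

# Callers iterate; a missing user simply contributes nothing.
greetings = ["hello " + name for name in find_user(1, users)]
no_greetings = ["hello " + name for name in find_user(2, users)]
```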


Well, no, you still need the null checks. You've eliminated null pointer exceptions by guaranteeing that null is always checked for before the value is used -- you're still checking, but you're now immune to unanticipated crashes. (That would have resulted from null dereferences, that is. You can still have other bugs.)


You need checks but only in the cases where you need checks. More importantly, nullable values are very obvious in the code (and should very much be the exception rather than the rule), so the cases where you need to think about "what if that's null" are cases you actually think about.


> I don't really see how using an Optional type would remove the if(null) checks. It just makes the meaning explicit, which is good but not remedying the original problem.

Yeah, the benefit is not really the Option (which you still have to check), the benefit is when there's no Option. I'm not really sold on there being a benefit in Java (looks like those are the code samples in the article?), but in languages where `null` is not a part of the type, and your parameter is just a plain `int`, say, then you know you don't need to check if it's `null`. In other languages, you either have to always check, or rely on a convention or code flow that has the null checks happening earlier.


> The point of passing boolean params instead of named functions is that most of the time there is shared code between the two paths and not literally all the code is enclosed in either the if or the else block. If the author was intending just to restrict to that one specific case where there was no overlap whatsoever, then I guess I would agree but I'm not sure how often that happens.

The cases where there's a lot of commonality are cases for polymorphism instead. If you have something like:

     def doSomething(useCache: Boolean) = {
       //long and complicated function
       if(useCache) cache.lookup(foo) else calculate(foo)
       //more long and complicated stuff
     }
then it's probably worth breaking that out as

    trait FooProvider { def get(foo: String): Foo }
    object CachedFooProvider extends FooProvider {
      def get(foo: String) = cache.lookup(foo)
    }
    object CalculatingFooProvider extends FooProvider {
      def get(foo: String) = calculate(foo)
    }
and passing the FooProvider instead - that way you pass something with a clear semantic meaning rather than a bare Boolean.

> I'd say that at least this trivial example would be best handled by some data-driven approach. Look up salaries in a database instead.

Very much disagree. Every time I've seen a program do logic based on what was in the database it's been very hard to debug or reason about. Anything you can possibly do to make it unit-testable instead of needing to test against a prod database dump is worth it.

> Surely this is easier to misread than the if statement example, which in my mind closely matches how I think.

Disagree. The if looks like control flow when it's not actually control flow. If you're just doing Boolean logic, make it look like Boolean logic.

> But even then, that only works when the actual return type is boolean and there are no state changes made.

Yes, that's exactly the point. It's well worth forcing those things into different functions, so that you can inspect what's going on. Particularly when debugging, that means you can step over the conditional and see what the result was, rather than having to step through each if because you never know when one of the branches might actually do something.

> I don't really see how using an Optional type would remove the if(null) checks. It just makes the meaning explicit, which is good but not remedying the original problem.

Optionals have polymorphic methods so you can express most common use cases directly rather than having to branch, i.e. rather than:

    val user = if(null == userId) null else userRepository.lookup(userId)
you can just do

    val user = userId flatMap userRepository.lookup
map/flatMap/foreach each have their own specific semantics so it's easy to see what's going on, whereas an "if(null == userId)" could mean anything. (If your logic really doesn't fit into any of the standard use cases then you may need to use an if even with an optional, but such cases will stand out when reading the code - as they should).



This is actually a better read than the OP. Thank you.


I only read bits and pieces of "Practical Object-Oriented Design in Ruby" by Sandi Metz, and the occasional article, but all of it has been surprisingly beneficial to my day to day coding.


I had a professor in university who frequently said he had developed a thousand applications without using a single if.

I know it's extreme, but is there any way to reduce (let's say 5 to 1) conditions?

I think he was developing Fortran apps.

Anecdote: This very same professor asked us to do a matrix operation (I don't remember which) without using a single if. He said it would be faster than using ifs. Many of us couldn't. Then he revealed his solution: he took the first index of the first row and did some operation, then the last index of the last row, then the second index of the first row, and so on. One of my friends said that if he wrote the algorithm plainly using ifs, it would be faster. The professor said that was impossible.

At the end of the day, the version written using ifs was a lot faster than the version without them, because the jumping indices caused a lot of cache misses, while reading the matrix elements sequentially used the cache correctly.


The performance cost of ifs is not that bad, especially if it means you can read sequentially from memory.

They do, however, represent branching, and in a hotspot it can be more efficient to simply calculate both the if and the else values and choose the right one, rather than suffer a branch misprediction.
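The compute-both-and-select idea can be sketched in Python (a real compiler would emit a conditional move; this only illustrates the shape, and assumes cond is exactly 0 or 1):

```python
def select(cond, a, b):
    # cond must be 0 or 1: both a and b are already computed, and
    # arithmetic picks the result -- no branch, so nothing for the
    # predictor to mispredict.
    return a * cond + b * (1 - cond)
```

Note the trade-off: both sides are always evaluated, which only pays off when the branch would have been frequently mispredicted.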

In any case, always profile, and identify issues before doing any optimization.


Programs with lots of deep branches are generally not well thought out and prone to bugs. In my view, every branch represents an "exception" to the main execution path. The explosion of combinations of taken/not taken branches is what brings your program to an unpredictable state. The performance side is debatable, but with fewer branches a program is definitely simpler and more elegant.


All of the proposed solutions are deficient:

> Pattern 1: Boolean Params

This solution suffers from what we typeful programmers call “Boolean blindness”: https://existentialtype.wordpress.com/2011/03/15/boolean-bli... . When you compute a bit, what you are actually interested in is the meaning of the bit, which isn't included in the bit itself. For example, if the bit is set, then “x” is less than or equal to “y”; if it's unset, then “x” is greater than “y”.

> Pattern 2: Switch to Polymorphism

Dynamic dispatch (what the author wrongly calls “polymorphism”) can be used for open-ended case analysis. But it suffers from two drawbacks: (0) Unless you have multimethods, case-analyzing two or more values at the same time is a bitch! (1) Even if you do have multimethods, the open-ended nature of this whole business makes static exhaustiveness checking (making sure that no case is missing) impossible.

> Patte[r]n 3: NullObject/Optional over null passing

Null objects are still falling back to pattern 2, and optionals are severely underpowered in languages without pattern matching and compile-time exhaustiveness checks. For instance, one of my favorite patterns is implementing operations (say, “merge”) on non-empty collections, then extending them to possibly empty ones:

    (* pe = possibly empty *)
    datatype 'a pe = Empty | Cons of 'a
    
    (* extend a merge operation on non-empty collections
     * to possibly empty ones *)
    fun pointed _ (xs, Empty) = xs
      | pointed _ (Empty, ys) = ys
      | pointed op++ (Cons xs, Cons ys) = Cons (xs ++ ys)
    
    fun merge (xs, ys) = ... (* possibly complicated *)
    and mergePE args = pointed merge args
How do Java-style optionals help?

> Patte[r]n 4: Inline statements into expressions

This only makes the problems associated to Boolean blindness even worse.

> Pattern 5: Give a coping strategy

What if there is no sensible default value? Just... no.

---

What you really need is algebraic data types, pattern matching and exhaustiveness checks.


Uhm. No, no, no, no and no.

> [Pattern 1] Solution: Split the method into two new methods. Voilà, the if is gone.

And so is parametrization.

> [Pattern 2]: Solution: Use Polymorphism. Anyone introducing a new type cannot forget to add the associated behaviour.

OTOH you make extensions more complex. I usually write in Lisp and DO NOT use polymorphism for this, but etypecase in the base implementation of a method (a switch/case where the default case automatically throws an error). This allows people who extend my code to implement a method specific to a type that my base implementation handles, thereby overriding my implementation with minimal effort.
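A rough Python analogue of the etypecase idea (handle is a hypothetical function): the base implementation enumerates the types it knows and fails loudly on anything else, leaving extenders to intercept specific types first.

```python
def handle(value):
    # Base implementation: enumerate known types, fail loudly on
    # anything unknown (like etypecase's automatic error).
    if isinstance(value, int):
        return "handled int"
    if isinstance(value, str):
        return "handled str"
    raise TypeError("no case for " + type(value).__name__)
```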

> [Pattern 3]: Solution: Use a NullObject or Optional type instead of ever passing a null. An empty collection is a great alternative.

This is, IMHO, not relevant at all; how null values are handled simply has to be documented so programmers can make an informed choice. The only exception I accept is an empty collection, which I'll always prefer over null, assuming that all other code handles empty collections gracefully.

> [Pattern 4]: Solution: Simplify the if statements into a single expression.

Works for the contrived example (and we all know that the code is really bad). For more complex code I'd rather follow the logic line by line than mentally disassemble a 10-line boolean expression.

> [Pattern 5]: Solution: Give the code being called a coping strategy.

No, just no. One does not aim at removing exceptions. Those are to be handled by a higher layer that actually knows how to handle them. If one uses default values, one still has to check on that layer, with the added burden of using a default value that is guaranteed never to be returned as a real value, so we can distinguish between record found and record not found.


Misko Hevery did a Google Tech Talk about this some years ago; he also used a very similar example with switching on the birds.

https://www.youtube.com/watch?v=4F72VULWFvc


This. If there's anything that's made me a better programmer it's Misko's talks.


One thing I've always done with if statements is when I have an if with a negated condition and no else case

    if (!condition) {
      // do stuff
    }
When I want to add an else case, I swap the order so the condition is no longer negated.

    if (condition) {
      // do other stuff
    } else {
      // do stuff
    }
To me it's easier to read a non-negated condition and the else case then feels more natural. (It avoids having to internally think about else case as "condition not not satisfied".)


Interesting. I often (but not always) prefer the if (!condition) {} at the top because I see it as 'bailing out' of the function if conditions are not met.


If I'm doing an early return, I usually don't put the rest of the function inside the else case since I feel it creates an unnecessary level of indentation.


Using 'if' indicates programmer's doubts about what a program should do. These are usually rooted in low self esteem and troubled relationships with parents.


if (myParentsLoveMe) selfWorth++;


Getting rid of if/else is nothing but moving the if/else to somewhere else.


This is only true in a very technical sense.

    function getColor(str){
       if(str=='blue'){ return '#0000ff'; }
       if(str=='red') { return '#ff0000'; }
    }
    
    getColor = {
      blue: '#0000ff',
      red: '#ff0000'
    }

The second one also has if statements inside the implementation of the hash map, but it isn't actually polluting your business logic. The question is no longer "what do you do when the input is like this or like that", but "lookup the corresponding output".

The distinction is nice because I know that getColor[thing] will likely have no side effects. I can treat it, on a theoretical level, strictly as a map.

Of course if the problem is "implement this on a machine without branching" this isn't super useful but most problems are not that.


I agree. For example when you are using some form of polymorphism, the compiler/runtime are handling your if/else for you. But, still in terms of understanding what a piece of code is doing you will have to consider all the possible branches whether they are explicit in code as if/else statements or hidden under some abstraction like map/dictionary.

Edit: The conditional abstractions like map are useful in the sense that the conditions they support are very simple; in the case of a map, it is whether the supplied key equals a stored key, whereas if/else statements can be used to create any kind of complex condition.


If you want an extreme case of removing all ifs, see how ifTrue: and friends are implemented in Smalltalk (i.e. true and false are instances of different classes), although essentially all compiler implementations turn that into normal conditional branches in the generated bytecode.


Also, from the functional programming side, Church Booleans:

    function  true(x, y) { return x; }
    function false(x, y) { return y; }
There's an example of Church encoding used in anger at http://www.haskellforall.com/2016/04/data-is-code.html


I think I'm looking at an implementation of ifTrue[1], but it's a bit dense to someone who's not familiar with Smalltalk. Maybe you could walk me through it?

[1] - http://git.savannah.gnu.org/gitweb/?p=smalltalk.git;a=blob;f...


That's the compiler part that optimizes the elegance away :). The interesting part is here [1], note that it will work without any support from the compiler.

[1] - http://git.savannah.gnu.org/gitweb/?p=smalltalk.git;a=blob;f... and http://git.savannah.gnu.org/gitweb/?p=smalltalk.git;a=blob;f...


Much more straightforward. If I'm reading it correctly, True and False are subclasses of Boolean, each of which implements a complementary pair of ifTrue: and ifFalse: methods? Elegantly symmetric. :)


All these complicated examples and no mention of functional pattern matching or pattern matching at all.

IMHO These are by far the two most important ways to get rid of complex code embedded with ifs.

One very good example of this is Erlang's factorial function.

   factorial(0) -> 1;
   factorial(N) -> N * factorial(N-1).
There are many other nice examples at the erlang documentation:

https://www.erlang.org/course/sequential-programming#funcsyn...


Unfortunately, pattern matching isn't available in many languages (or at least, including a new dependency for the sake of removing `if`s may not be worth it).

On the other hand, the article recommends using sub-type polymorphism, which is also unavailable in many languages.


True functional-style pattern matching is only slowly coming into imperative languages right now, most prominently in Rust.


Generally speaking, for every refactoring you could do, the opposite move is sometimes useful too. For example, inlining a function and extracting a helper function could both be a way to simplify code, depending on what you're trying to do.

Similarly here, I think it's good to know how to get rid of an if statement, but keep in mind that sometimes you might want to introduce an if statement. Sometimes you want polymorphism and sometimes it's better to do the same thing with pattern matching.


The first "anti-if" should be to resist adding unnecessary features, on every level. The fewer branches, the fewer possible ways to fail.


I like the addition of the "tolerance" bit to examples, more blogs should have a "when not to use" section.


I don't know what's so "Missing" about the polymorphism anti-if: I've seen it ruin plenty of code.


> Pattern 3: NullObject/Optional over null passing

The solution here disperses the responsibility of keeping sumOf safe to its callers, all over the place, instead of a single location in sumOf.


Agreed. The sumOf function is now made more brittle by that fact, and would be unworkably so in a publicly-available API. This is easily fixed by a try-catch block if one finds that they are indeed allergic to if statements, although that too is a form of flow control. If you are programming in Go, you're going to have to bite the bullet and use an if.

(I suspect the author would prefer jumping in a lake over programming in Go.)

I think it is also worth mentioning that passing a new ArrayList instead of a null value is somewhat wasteful of memory. Though the reference to the ArrayList will cease to exist, its allocated memory will hang around until the next GC pass. In the words of Bartleby, I would prefer not to.

I get the principle that is being advocated, but the solution code doesn't pass practical muster for me.


In this very specific instance, you could easily and idiomatically reuse a single instance of an immutable empty list:

https://docs.oracle.com/javase/7/docs/api/java/util/Collecti...

No dynamic heap allocation necessary.
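The same trick is easy to sketch in Python: hand out one shared, immutable empty sequence rather than allocating per call (EMPTY and orders_for are illustrative names):

```python
# One shared, immutable empty tuple stands in for
# Collections.emptyList(): nothing is allocated per call.
EMPTY = ()

def orders_for(customer_id, orders_by_customer):
    # Absent customers get the shared empty sequence, never None.
    return orders_by_customer.get(customer_id, EMPTY)
```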


It is more brittle, but sometimes the best you can do is to prefer empty collections over null. Some environments do offer a solution using parameter annotations. Personally I do a lot of C# with ReSharper, where you can add the [NotNull] attribute to a parameter, which enables some compile-time checking.


This reminds me of something I read on HN a while back and forgot to bookmark. (Naturally I've been unable to find it since.)

I think the article was written by a company who made a static code analysis tool. They described how there wasn't a large difference in quality between FOSS and proprietary code, except for one detail: FOSS code used "else" statements much less.

Anyone have any idea what I'm thinking of or if I'm even remembering it correctly?


That does sound familiar. Was it possibly from either the NDepend blog, or more likely, the PVS-Studio blog?


Did you try http://hn.algolia.com ?


And if there is no record in the repository, what should I do with the default value? Check if the returned value is different from the default? I actually don't see any real benefit in cleanliness. The only thing that could actually happen is using the default in some way that could eventually bring the data in some strange state. An NPE would at least shout "YOU MESSED SOMETHING UP" in my face.


Pattern 5 is all about removing semantic value from null references, but has nothing to do with if statements... In fact, the if-else remains (albeit hidden because of an early return inside an if, which is even worse in my opinion).


I'm surprised there was no mention of the Specification pattern.

https://en.wikipedia.org/wiki/Specification_pattern


My god what an abomination of programming simplicity. Do people actually write code like this? It's as if someone said you must use interfaces and you must use objects for even the simplest boolean logic statement. I think I would claw my eyes out if I had to read and write code like this on a regular basis.


I'm quite supportive of the anti-if pattern, but the presented examples sound to me more like "code smells". Anti-if is a great winner when you can safely compute all the execution paths in advance (in a way where you don't have to nest ifs, of course!), but in situations where your code has to react to something that happens "in the moment"... well, ifs are probably more convenient (not always, of course). Anyway, thanks for raising the point; this topic is always interesting to me.


I was hoping for some examples that reduce costs incurred by failing branch-prediction in the CPU.



