Why does anyone have to tell people not to do this? How does it enter anyone's mind as a thing to do in the first place? I've been known to go too far to minimize nesting. I get twitchy at the second level. By the third my brain is trying to crawl out my eyes and strangle me, even on code that doesn't have a potential exit point at every level.
I do it in C code instinctively; it's very useful for instrumentation, debugging, and resource management in straight C to have a single return point. When I get too far to the right, that's a signal that it's time to further decompose my functions; that signal is also useful, which is another thing that keeps me doing it.
But that style doesn't make much sense in Golang. It makes even less sense in Ruby and Python, which have structured exception handling; if you're coming from Pythonistan, the idea of nesting conditions is probably entirely alien.
This style is used in the linux kernel as well; there can be multiple resources acquired that sometimes need to be released in reverse order, so there are often multiple labels that you can goto, depending on how much stuff you have to unwind before you return.
Yes, CPS is essentially that and available in any modern language, or can be emulated by setcontext(2)/getcontext(2). However, it can become a mess very quickly.
Another option is monadic style, which will pass error checking along. I believe it would work well for this example, and can most likely be implemented in Go (I don't know Go, but it seems so).
Of course, the real problem with this code is that it is not decomposed properly. Nested error checking is the first sign of it, as somebody else already rightfully noted in the thread.
Quite a few people write deeply nested code for some reason unless you tell them not to. Maybe it depends on how your brain works. There is surprising variance within the human population.
I find deeply nested code ugly and unreadable, others nest eight levels deep and love it. Some people even claim they find parentheses soup (LISP-like syntax) readable. Personally, I cannot comprehend that.
I find Lisp-like syntax highly readable - more readable than C-like syntax for sure.
I find that anything more verbose (EDIT: I'm having difficulty thinking of the right word for what I mean here - hopefully you understand anyway) is useful while you're writing the code, but actually increases the difficulty and length of time to understand what's going on when you come back to it.
Symbolic representations of relationships - including parentheses among many others - just make more instant sense to me. To the extent that a characterisation of the Lisp family as "parenthesis soup" just does not make sense to me -- it implies an arbitrary jumble of symbols, while for me, as none of the parentheses could possibly be moved, they're in a perfect and immovable pattern.
I'll admit, though, that without [ and ] I would sometimes be more than a little lost.
I think I might enjoy an APL-like language if I ever had time to learn it...
Even the correct version is not that good. The multiple checks on nil are an obvious pattern and as such should be abstracted away. Haskell does it with Maybe, but as long as you have functions as first-class objects, it should be doable. Like in Python, Ruby, etc.
> Even the correct version is not that good. The multiple checks on nil is an obvious pattern and as such should be abstracted away.
From a pedagogical point of view, showing the first transition from the anti-pattern is good practice, because once you learn that, those first transitions are composable. So, while I agree that the correction for this isn't the best end-state, it's an appropriate way to show how to correct the nested-conditionals-with-error-returns anti-pattern.
If an error happens on L2, we will still run the writes on L3-L5. Why is this bad? Because in the future, we might come in and add logic after the writes complete. We have to make sure to check the error, or we will run this logic even if the write did not work. This is an incredibly easy trap to fall into.
I'd go as far as to call that an anti-pattern - you are hiding the control flow in non-obvious ways.
If err is of type int with 0 == noError, this will require some new syntax if you want to return the correct error code (which you should want to do). I would suggest introducing
x ||= y <===> x = x || y
In English: if x (still) equals its default, evaluate y and assign the result to x.
Advantage: that's how the shell works, too. Disadvantage is that 'logical or' is not what one associates with "run items until first failure", but that can be learned.
An alternative could be to have a language construct that takes a sequence of lambdas, executes them in sequence until the first one that fails (if any), and returns the index and the result code of the failing lambda. That would get you something like:
    do := func(n int, err error) {
        if err != nil {
            panic(err)
        }
    }

    do(string.Write(/* 1 */))
    do(string.Write(/* 2 */))
    // ...
    do(string.Write(/* n */))
In fact, the pattern I've found is to declare throw-away lambdas [2] that I then call a few lines later. I haven't looked at all the pros/cons of doing this, but so far I find it a very versatile way of doing things.
> An alternative could be to have a language construct that takes a sequence of lambdas, executes them in sequence until the first one that fails (if any), and returns the index and the result code of the failing lambda.
You could write a function in Go as it is that takes a sequence of lambdas and does that, so why would you need a language construct?
You can panic inside Write() and recover at the top of the DumpBinary() function. This has been demonstrated by Rob Pike and Andrew Gerrand in a talk at Google IO.
As long as you don't leak panics outside of your package, it's OK to use them for non-local error returns.
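A minimal sketch of that internal panic/recover pattern (the helper names here are placeholders, not the actual code from the talk):

```go
package main

import (
	"errors"
	"fmt"
)

// check panics on a non-nil error; the panic never escapes the package
// because dumpBinary recovers it at its top.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

// write stands in for a fallible library call (hypothetical).
func write(field string, fail bool) {
	if fail {
		check(errors.New("write failed: " + field))
	}
}

// dumpBinary converts any internal panic back into an ordinary error
// return, so callers outside the package never see the panic.
func dumpBinary() (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = r.(error)
		}
	}()
	write("name", false)
	write("age", true) // fails here; the deferred recover catches it
	write("fur", false)
	return nil
}

func main() {
	fmt.Println(dumpBinary()) // write failed: age
}
```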
Yes, but that would work best if the culture was to panic to signal errors. If it is not (as, IIRC, in Go), you need to wrap common library calls to do so.
The only problem I see is that binWriter is poorly named: it should be called unreliableBinaryWriter (alternatively, and possibly better, its Write method should be called UnreliablyWrite.)
The purpose of the type and its method is to provide a mechanism that (1) abstracts different writing methods for different data types, and (2) silently swallows but records errors, providing a try-but-don't-care-if-I-fail write where you can check for errors later.
It's not a problem that adding logic after those writes that assumes they were reliable produces incorrect behavior, though it is a problem that the code where the type and its method are used doesn't clearly reflect the intentionally unreliable nature of the operations.
Absolutely, I'm going through some of my code now where I had thought of the first improvement as obvious but hadn't considered that second, much nicer solution in Go. It really is like making a custom Maybe type. It might be another pattern to be improved on to have tons of these sorts of custom types running around, though...
I normally see a form of it from people who don't like early returns. But if you try to do that with code that explicitly passes back errors, you end up having to twist the code to avoid early returns. I guess this form pushes all the returns into a list at the bottom, which makes them more comfortable?
People don't like early returns because sometimes they return before some basic cleanup or must-run component of the function executes, normally added by someone unfamiliar with the function. Go's defer statement makes this much less of an issue: you bind your cleanup to your work so that early returns are fine.
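That binding can be made concrete with a short sketch (the file name is arbitrary):

```go
package main

import (
	"fmt"
	"os"
)

// process has several early returns; the deferred Close is tied to the
// successful Open, so every exit path after it runs the cleanup.
func process(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err // nothing to clean up yet
	}
	defer f.Close() // cleanup bound to the resource, not to an exit point

	buf := make([]byte, 8)
	if _, err := f.Read(buf); err != nil {
		return err // early return: Close still runs
	}
	fmt.Printf("read %q\n", buf)
	return nil
}

func main() {
	// With a missing file, Open fails and process returns the error.
	fmt.Println(process("no-such-file.bin"))
}
```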
Historically some people were against early returns, loop flow control, and the like as part of structured programming's overreaction to the widespread unstructured use of goto for flow of control. They wanted every body of code, be it a function or a loop, to have one entry point and one exit point.
Over time programmers have learned that one entry point, multiple exit points, is OK. But I feel that it is a good habit to flag that with a comment in capital letters so that someone skimming the code can't miss it.
> Why does anyone have to tell people not to do this? How does it enter anyone's mind as a thing to do in the first place?
It's pretty much the natural, naive composition of conditionals. I think eventually most people come to the point where the rightward-marching blocks become irritating and they look for ways to avoid them. In the case of conditionals where one branch ends in a return there is an easy way to do that, but that particular case (and, thus, the solution) may not be as familiar to people coming from languages where error returns aren't idiomatic (though you do run into it elsewhere, just not as often, so it's less likely to result in deep nesting if you don't pay special attention to it).
People end up doing this because code gets written incrementally and often we start out wrong (e.g. here with the wrong/inverted condition). Rewriting large code blocks for a little more clarity is often a PITA.
It would help very much if editors supported this better (e.g. single keystroke inversion of an "if"). Go with its easy syntax would be a particularly good target for automatic rewrites for code like the above, or even just editor-level hints for better style.
Because the alternative is worse. If you test-and-return after every call, your function has multiple exit points and is far less maintainable. If you nest like this, you at least have a chance of maintaining a single exit point in your function (even though this example fails to do so).
This is why the Lord invented exceptions, which it seems that Go does not use. This one example is enough to convince me to never use Go for anything. What a huge step backwards.
I encounter this argument daily and I believe it usually comes from programming dogma of people who read very assertive quotes from Dijkstra.
Most of the time when error checking, returns (or in C, gotos) are fine and lead to more readable code that's easy to make sense of and easy to step through with a debugger.
    Please don't fall into the trap of believing that I am terribly
    dogmatic about [the go to statement]. I have the uncomfortable
    feeling that others are making a religion out of it, as if the
    conceptual problems of programming could be solved by a simple
    trick, by a simple form of coding discipline!

- Dijkstra (1973), in personal communication to Donald Knuth, quoted in Knuth's "Structured Programming with go to Statements"
>How does it enter anyone's mind as a thing to do in the first place?
Because a lot of people make up code as they go along.
Think of it as storytelling vs. ordering. When storytelling, something happens and then your characters/code react to the outcome of that event, but stay in the same context.
Instead it would be better to write code as if you're being ordered around by an old drill sergeant. He'll always tell you to do one thing and one thing only. When you've completed that task, you're given the next instruction and are no longer in the context of the previous action.
It's something I thought about often when reading code, both that of others and my own and it's the best explanation I've been able to come up with.
Short circuit returns are the devil - they make it much harder to factor out part of a function into a smaller function. A function should have one entry point and one exit point; that's the whole point of structured programming. If you're going to return from some random point in the middle of your function you might as well be using goto.
(Of course, good programming languages provide a better solution than pyramid-of-doom nesting)
Naïve, absolutist positions in areas of long-standing debate between programmers of great experience and the highest imaginable competence just makes you look ridiculous.
Naive, absolutist positions in areas of long-standing consensus between programmers of great experience and the highest imaginable competence makes one look even more ridiculous.
By and large, the best programmers eschew nesting in favor of early returns. Invariably (in my experience) those who argue against early returns are inferior programmers (and not only by virtue of lacking taste in this particular debate).
Where did you get that idea of consensus? A lot of languages do not even have a return statement; neither does the lambda calculus. Furthermore, the CS community has long abandoned statement-based languages in favor of expressions and relations, which do not feature "return" for obvious reasons in forms other than those equivalent to a jump.
I'm talking about programmers, not computer scientists. That is, people who actually accomplish things in the real world by hacking on software, rather than pontificating about it from their monadic ivory towers :) The latter have "abandoned statement-based languages," but the former absolutely have not.
I don't see that distinction. There are languages designed to map directly to a register machine's internal operation, like C or C++ or Fortran, and those are statement based because that's how the machine works. There are also high-level languages, designed to express computation, and those are expression based, again, because computation is inherently based on expressions. There are other kinds of languages as well, for other purposes (relations, queries, etc.).
Do you really believe that only people working on low-level, register-transfer-level things "accomplish things"? That sounds like a very old assembler argument. :-)
> By and large, the best programmers eschew nesting in favor of early returns.
Yup.
> Invariably (in my experience) those who argue against early returns are inferior programmers (and not only by virtue of lacking taste in this particular debate).
> They generally make the function smaller, that's what guard clauses are for.
My point is: frequently, one wants to extract a part from the middle of a function to make a new function. This is very easy (and can usually be done automatically) unless said part contains a return.
Suffering the unbearable burden of a single exit point just so your refactoring tool has an easier time breaking up the code isn't a trade worth making, especially when it's the single exit point that's probably making it too long to begin with.
> A function should have one entry point and one exit point; that's the whole point of structured programming.
I think a reasonable primary source for "the whole point of structured programming" is Dijkstra's "Go To Statement Considered Harmful"[1]. It's a short article and once you get past his writing style, his point is simple and lucid.
In the first part he lays out a simple question: given a program text, how much information do you need to keep track of to correctly identify where the program is currently executing at some point in time and how it got there? Roughly, if you were paused in the debugger, how much state does the debugger have to hold in order to be able to resume where it left off?
He's asking this because the smaller the amount of state required for this, the easier it is for a human to look at a program text and figure out what can happen while it's running dynamically.
If all your language had was assignment statements, it's simple: you basically just need a single line number. Adding "if" and "switch" for branching doesn't add any more complexity. And, of course, reading code like this is pretty trivial.
If you have procedure calls (and by implication, recursion), you need a stack of those numbers, which is exactly what the callstack in your debugger holds.
When you add "while" and "for" for looping, you need to keep track of how many times you've gone around the loop.
Now, if you add "go to" everything goes to hell. If you're on line 10, did you get there because you were on line 9 before, or because you jumped to it, or some random combination of those? It's a total mess.
Then he says:
> I do not claim that the clauses mentioned are exhaustive in the sense that they will satisfy all needs, but whatever clauses are suggested (e.g. abortion clauses) they should satisfy the requirement that a programmer independent coordinate system can be maintained to describe the process in a helpful and manageable way.
Here he's saying you can add any other "clauses" (flow control constructs) to a language that you want as long as you don't add to the amount of state you need to store to keep track of how you got there.
For example, adding an "unless" statement that works like "if" but only has an "else" clause instead of a "then" clause is peachy. You don't need any additional data to keep track of where you are.
You know what else doesn't require adding any additional data? Early returns.
So as far as Dijkstra is concerned, no, avoiding early returns is not at all the point of structured programming.
Straight exceptions are also goto-like, but there's the slight improvement that they allow you to (automatically) refactor out the middle of a function without changing the behaviour.
The solution I prefer is a monad with some light notation (Haskell do, scala for/yield). So it looks like:
The <- syntax clues you in that this computation is going on in some kind of "context" - if you're working with futures it might happen on a different thread, if you're working with collections you're iterating over the elements. Here we're working with a validation, and our computations only happen if the previous one succeeds.
Is this goto-like? You could argue so - once one computation fails, the rest turn into noops and we pass immediately to the end. But to my mind this is like replacing an if/else with polymorphism - our validation context is an object with certain behaviours, and we're calling methods on it with our functions as parameters, which will behave differently depending on which subtype our context instance is.
I don't see how the <- clues you in any better than the error handling that things might exit early. It seems from your response that the goto-ness of the go solution is not what you're really objecting to, because your proposal is equally goto-y. As far as I can tell, it's really the if-else verbosity that you don't like, which is fine (and I agree, exceptions are better), but it has nothing to do with the "whole point of structured programming." And, unless I'm mistaken, structured programming also lacked exceptions in its initial formulation.
I think the main thing I'm objecting to is the possibility of multiple paths through the function. Doing it this way there's only one possible flow: a series of calls that pass blocks to validation objects (that then may or may not execute them).
Compare how smalltalk didn't have a conditional control flow statement. Rather, the boolean type has a polymorphic method that takes a block and then executes it or not.
The other nice thing about this approach is it makes the language simpler and more regular because it's implemented using standard language constructs rather than a special statement. E.g. you can write a "gather" function that takes a collection of monads and returns a monad of the collection, and this is generic in the monad (so the same function works to turn a List of Futures into a Future of a List, a List of Validations into a Validation of a List, etc.)
No, no, no. That's what college professors with no real world experience tell you. In the real world, professionals use guard clauses to exit early all the time.
1. The file I/O makes the case for including exceptions in the language.
Specifically, adding one-off types to deal with exceptions is a bug, not a feature.
There is a good case against exceptions but that ain't it.
2. On slide 5, it appears to show that you have to use a switch statement on a generic to get polymorphism because the language doesn't support overloading. Again, looks more like a bug than a feature.
Also, is the "break;" implicit in Go? At first glance, it looks like a coding error.
Presumably these are tips for coders, not for language developers. As such, the language has neither generics nor exceptions and the programmer has to deal with that.
W/r/t #2 - you're not familiar with Go but knew exactly what was going on. That's totally a feature. The language was designed around exactly that kind of reading.
Sure, I spent a decade in C. It's not hard to read.
My only problem with using generics in this context is that you can't catch type-conversion errors at compile time.
Seems like a step backwards with only downside. I get why exceptions are a double-edged sword. I'm not clear on why undermining compile-time type safety is a feature.
> I'm not clear on why undermining compile-time type safety is a feature.
I think this is what people will ultimately focus on when considering Go. Many of the complaints are a product of type weaknesses in the language, voiced by people who had assumed that a modern static language wouldn't have that fault. Others tend not to mind because they lack that expectation, and regard the dynamic behaviour you can get as a feature. The argument about shared mutable state goes the same way, but for some concurrent but non-parallel code it might be convenient.
I can easily see people picking Go when moving from Python. But not when moving from a static language with a stronger, safer type system.
Also, as burntsushi points out, it does require more sophistication in the type system. I doubt they're trying to sell a naively simplistic type system (sophistication often makes it easier to use), but when Go was announced the feature they seemed to be selling the hardest was short compilation times. I think that feature is the seed of this behaviour.
> I'm not clear on why undermining compile-time type safety is a feature.
More safety usually requires a more complex type system. Such a type system is usually more expressive and can make more guarantees about your program, which is a pro. But of course, it is also more complex, which is a con.
Totally agreed about #1 and exceptions. In that definition of DumpBinary, each bit of error handling takes three extra lines. Speaking of "cognitive load". Whereas here's what it'd probably look like in Python, with exception handling (assuming binary.write() raised IOError on error, which it should).
    class Gopher:
        def dump_binary(self, writer):
            """Write this Gopher to given writer, raise IOError on error."""
            binary.write(writer, binary.LittleEndian, len(self.name))
            writer.write(self.name)
            binary.write(writer, binary.LittleEndian, self.age)
            binary.write(writer, binary.LittleEndian, self.fur_color)
I can see the argument for Go-style explicit error handling, but having this as the first best-practice example just doesn't sit right or sell it very well.
Edit: Okay, I guess I should have read the next slide before commenting. Still, the "one-off utility type" is longer and more complex than the original error handling (so I wouldn't do it unless you're using it elsewhere as well).
> The file I/O makes the case for including exceptions in the language.
Exceptions are included in the language, they are called panics, and the go convention is that libraries don't expose them in the public interface, but can use them internally (and, of course, application code can use them.)
> Specifically, adding one-off types to deal with exceptions is a bug, not a feature.
One-off types aren't used for error handling in the example, they are used for abstracting a writing pattern that works like binary.Write for writing non-string values and like io.Writer#write for string values. Sure, the special type's write method also swallows errors, but, except that the mechanism by which it swallows errors would look different, the use of the one-off type and its write method would be pretty much the same with exceptions/panics as with error returns from the underlying library functions.
My point is that an example of a 'best practice' shouldn't make the reader think: Oh, that's a kludge to get around a design decision in the language.
In his example, he uses a one-off type to isolate the caller from having to explicitly check whether each individual write failed. I've got no problem with that. But it seems like a workaround.
It's just an odd choice for an example. The takeaway seems to be that the basic class libraries need a wrapper that makes them easier to work with, and that you, the developer, should build these wrappers so you know exactly what the policy is.
> My point is that an example of a 'best practice' shouldn't make the reader think: Oh, that's a kludge to get around a design decision in the language.
I would think that some of the most important best practices would relate to the best means of dealing with situations where the approach users coming from other languages might naturally seek to apply are not the most appropriate, either because the other-language feature they are likely to have used does not exists (or works differently) or because of features in the target language that allow a better approach than in other languages.
> In his example, he uses a one-off type to isolate the caller from having to explicitly check if each individual write failed. I got no problem with that.
You'd do that if the library function threw exceptions, too. Eliminating repeated try/catch blocks (or calls to inline functions with deferred recover calls in go) and eliminating repeated if/then blocks are pretty much the same thing.
The substantive difference between Go and, e.g., Java with regard to exceptions is that Go builtin and standard library functions panic in a much narrower range of circumstances than Java's standard library. Go seems to prefer that the decision that an error condition is treated as a panic is generally left to user code that is written with more awareness of what is exceptional in the context of the role of that code than standard library code has.
So, you could write the code in pretty much exactly the way you propose in Go; you'd just need to write a wrapper function around binary.Write that panics on errors.
> Go seems to prefer that the decision that an error condition is treated as a panic is generally left to user code that is written with more awareness of what is exceptional in the context of the role of that code than standard library code has.
That makes little sense in the context of ioWriters and, frankly, most contexts.
How often do you write code where you deliberately want to be oblivious of IO errors?
> How often do you write code where you deliberately want to be oblivious of IO errors?
The existence of error returns means that the IO library function "not panicking" and the calling code being "oblivious of IO errors" are not equivalent.
I think the motivation for the Go convention of keeping panics internal and reducing them to error returns in library APIs is minimizing the potential downsides of the way unchecked exceptions are not part of the declared interface of functions and yet have a major effect on control flow. A convention of using panics (which amount to unchecked exceptions) only within logically bounded units is, IMO, a sensible approach to this.
(You could do this with checked exceptions, which require additional syntax in declarations. When you already have support for multiple-valued returns, I don't see that checked exceptions get you much that's worth making signatures more complicated.)
> When you already have support for multiple-valued returns, I don't see that checked exceptions get you much that's worth making signatures more complicated.
Seriously?
Does your test suite exercise every potential I/O error and timeout on every single one of your I/O calls?
Technically, the language does support exceptions. That said, they're in the "please never use this, ever" category.
/pedant hat off
The spirit of your comment is right, however -- the wonky code resulting from error handling, just like the "compile error on unused vars or imports," is something most new Go users find jarring.
> Technically, the language does support exceptions. That said, they're in the "please never use this, ever."
No, it's not. The convention is that any use of panics within libraries should be internal, and that libraries' exposed interfaces should use error returns. [1] Use of panics internal to libraries, or use of panics within application code that is not creating a library for others to consume, is not discouraged.
It's just a different (arguably better default) to have to explicitly ignore errors and/or explicitly bubble them up the call chain.
You'll always be in control of your code's control flow that way. You'll never have some random library 5 levels beneath your code throw an exception that you didn't know about, causing your function to return prematurely, resulting in your function accidentally leaving some file handle open, a mutex locked or some similar problem (I realize in Go this should be handled via defer anyway, so would probably not be an issue in practice). In short, you know exactly what error cases you should be thinking about and are forced to explicitly reason about whether or not you care about it.
A few languages fix some of these concerns with checked exceptions, but checked exceptions have their own limitations and drawbacks. Of course, regardless of the approach taken, lazy programmers will always do the minimal amount of effort required to ignore errors/exceptions.
I have general issues with exceptions. I've seen too many developers use exceptions in place of conditionals and, in the worst abuses, use exceptions as some hacked form of GOTO that lets them jump to different points of execution within their call stack.
They're just a little too easy to abuse and are often used for non-exceptional cases. So while checked exceptions improve on exceptions, they are still exceptions at the end of the day.
Though honestly I think a better solution is monads.
    // returns Validation - either success, or the first error (which stops processing)
    // return values are directly accessible, because later code won't run unless earlier code succeeds
    for {
      _ <- binary.Write(w, binary.LittleEndian, int32(len(g.Name)))
      _ <- w.Write([]byte(g.Name))
      _ <- binary.Write(w, binary.LittleEndian, g.Age)
      _ <- binary.Write(w, binary.LittleEndian, g.FurColor)
    } yield {}

    // Runs all the operations, ignoring errors
    // type system will force you to check a return value before you can use it
    binary.Write(w, binary.LittleEndian, int32(len(g.Name)))
    w.Write([]byte(g.Name))
    binary.Write(w, binary.LittleEndian, g.Age)
    binary.Write(w, binary.LittleEndian, g.FurColor)
They do something interesting in the 'template' lib: they have a func called Must which will eat the error and throw a panic if you don't want to handle each error yourself. I like it because it's much more explicit than the func just throwing the exception without you knowing.
Something that doesn't sit right with me is the use of a "channel of bool" when the receiving goroutine doesn't actually care whether true or false is sent. It muddies the API to force the sender to choose one of two values when all that's really wanted is an amorphous signal.
> A channel may be closed with the built-in function close; the multi-valued assignment form of the receive operator tests whether a channel has been closed. [1]
Given that is available, why is the use of a separate bool quit channel preferred?
A quit channel (that is not receive only) can be used to send back a quit message by a goroutine that only has a receive-only view of the main data channel; close() can't be called on a receive-only channel.
Cool, I like that. In some ways, it would be nice if there were a friendly alias for struct{} as part of the language, but I suppose it's hard to come up with a good general name for that.
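A minimal, self-contained sketch of signaling with close on a chan struct{} (assembled here for illustration):

```go
package main

import "fmt"

func main() {
	quit := make(chan struct{})
	done := make(chan string)

	go func() {
		<-quit // unblocks when quit is closed; no value is communicated
		done <- "stopped"
	}()

	// close broadcasts to every receiver at once; unlike sending a bool,
	// there is no arbitrary true/false choice to document.
	close(quit)
	fmt.Println(<-done) // stopped
}
```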
This is why I love pattern matching in languages that support it - typing is just one of the things on which they can switch. As the Scala website puts it (paraphrasing), "Pattern matching is switch on steroids!"
Here is how you would return all leaf values in a binary tree in Scala:
sealed trait Node
case class Fork(left: Node, right: Node) extends Node
case class Leaf(value: Int) extends Node
def getValues(node: Node): List[Int] = node match {
  case f: Fork => getValues(f.left) ++ getValues(f.right)
  case l: Leaf => List(l.value)
}
EDIT: Would actually be more idiomatic in Scala to extract the values from the case classes rather than just matching on type...yet another wonderful feature of pattern matching. Coded as follows:
def getValues(node: Node): List[Int] = node match {
  case Fork(left, right) => getValues(left) ++ getValues(right)
  case Leaf(value) => List(value)
}
Here's the tail-call optimized version, won't blow the stack. Technically idiomatic, but way too cryptic to be serious code. Clearly I need more outlets for writing Scala. =)
@tailrec
def getValuesIter(nodeList: List[Node], accValues: List[Int]): List[Int] = nodeList match {
  case Fork(left, right) :: rest => getValuesIter(left :: right :: rest, accValues)
  case Leaf(value) :: rest => getValuesIter(rest, accValues :+ value)
  case Nil => accValues
}

def getValues(node: Node): List[Int] = getValuesIter(List(node), Nil)
Of all the code in there this part confused me. What exactly is being switched on? It looks like v is being reassigned to the type of v, then the type of v is written out (instead of the value).
It's being switched on the type of v (string or other, in this case), though in a more complex case you could easily have several different types. The assignment basically redefines v to be the matched type inside the case statement. You could easily just add a "sv := v.(string)" as the first statement of "case string", then use sv in place of v within that block, but this does read much cleaner.
I think it gets more interesting when using several, often complex (struct) types in the same switch statement.
> It looks like v is being reassigned to the type of v,
v is not being reassigned, it is being declared (note := rather than =). But it's a little complicated, because type switches are a special syntax construct that is similar to, but different from, ordinary expression switches. See "Type switches" in the language reference [1]
"type" is a magic word in Go, and in that example. It's highly idiomatic -- it's inconsistent with the rest of the language (Using "type" instead of an actual type), but it makes sense once you memorize the idiom. Sort of perlish -- there are two different operations that look basically the same, and the correct one is chosen based on context (in this case, the context is "is there a type name, or "type" literally?)
Perhaps it would have been cleaner to use "*" or some other operator symbol instead of the reserved word "type"
> "type" is a magic word in Go, and in that example.
Specifically, type switches are a specific syntax construct that look very similar to normal switches, but switch on something that looks like a type assertion with "type" in place of an actual type.
Meta comment: does anyone know what software is used to generate these slides? I've seen a few slide decks in the same format and they're impossible to use on mobile. I'd like to fix that
"Deploy one-off utility types for simpler code" can be called a monad or Optional. I wonder if the language developers will add more formal support for that; it looks impossible to add Optional as a library due to lack of user-configurable generics.
If it had been done at the very beginning, I think it would have been wonderful...but given that legal, idiomatic nil is already out in the wild I think it's too late. Such a shame - in the example of error handling, returning an Option doesn't appreciably increase verbosity:
_, err := potentiallyErroringOperation()
// with idiomatic nil
if err != nil {...}
// with monadic Option
if err.isDefined() {...}
Go occupies an interesting space. In my mind, I see it as competing simultaneously with C and Python. I suppose that the developers didn't see a place for an Optional type within that realm. I have to admit, I think it's a shame; huge proponent of non-nullability here.
Especially given that with Go's lightweight lambda syntax a functor type would be easy to work with, I'm disappointed with its exclusion.
> Go occupies an interesting space. In my mind, I see it as competing simultaneously with C and Python. I suppose that the developers didn't see a place for an Optional type within that realm.
I wouldn't assume that everything that hasn't been implemented in Go 1.1 is unimplemented because the developers of Go "didn't see a place for" it.
Now, either not seeing it as more important than the things that did make it in, or seeing it as trickier to implement and acceptable to do without in 1.x -- that's quite plausible.
You make a valid point, but for something as fundamental as nullability, I think that's baked into the core language spec. It's possible that we could see an Option type in the future, but the fact that it's not an integral part of the language now means it would be unreliable and would defeat the purpose of eliminating NPEs.
> You make a valid point, but for something as fundamental as nullability, I think that's baked into the core language spec.
Well, it's certainly out for 1.x; I wouldn't presume to assume how much or little flexibility there will be for 2.x if/when it happens.
> It's possible that we could see an Option type in the future, but the fact that it's not an integral part of the language now means it would be unreliable and defeats the purpose of eliminating NPEs.
Assuming that it's not part of a breaking change, sure; but the no-breaking-changes pledge only applies to 1.x. If there is a 2.x, it will be because a need is seen for breaking changes.
I think that beyond a handful of core features, keeping Go 1.x small was a key goal, and getting real production usage experience with the small 1.x to decide on future directions.
> Didn't read, because mouse scroll wheel doesn't work.
You don't need the scroll wheel to navigate the presentation. Click on the right side of the slide to go forward, on the left side to go back.
> Honestly, who thinks this stuff is a good idea?
People who are making presentations to deliver in an in-person setting where putting them on the web for everyone is a secondary use, not the primary use? I mean, looking at that, it would be a lot better as the visual component of an in-person presentation than it is on its own (though its not without value on its own.)
Yes all those options work. But why remove a common method of navigation? In addition, the format makes it impossible to search for text within the presentation.
Because it wasn't 'removed', rather it wasn't 'created'. You seem to be missing that supporting this feature is extra work and not 'built in' to the way the project is constructed. The author didn't sit there and say, well, fuck the wheel, I'll just throw in a line of code here to disable it, just to piss people off.
Also completely useless on mobile. It's so frustrating when sites that are basically just text try to get fancy, mess it up, and don't provide a fallback.
Thirteen: don't try to sort, it's going to be painful if you do.
http://golang.org/pkg/sort/ (See example 1 -- you have to write that for every concrete slice type you want to sort; it's not enough to write it once. And god help you if you also want to sort other collection types.)
Wow, that's magnificent, although I don't think Go's reflection capabilities are quite sufficient to fix this particular language bug with decorators. But maybe my imagination just isn't sufficient.
The interface is quite bad. I was able to scroll right to see a total of three slides, with no indication that there were any others. I thought that was the end!
I tried hitting space, and another slide scrolled in! But I wasn't done reading, so I hit shift-space, which is the common idiom to reverse the direction of space. But it scrolled in the same direction. Now I'm two slides behind. After some more thrashing, I found that I could use the arrow keys to navigate. Delete also goes back.
Please don't make me use trial and error to figure out your UIs, Google!
For what it's worth, this is also a very common pattern in ruby, using blocks:
def with_error_handler
  if error = yield
    puts "error: #{error}"
  end
end

def do_things
  error = do_this
  return "error doing this: #{error}" if error

  error = do_that
  return "error doing that: #{error}" if error

  nil
end

with_error_handler do
  do_things
end
Having said that, I did need to read the Go version more than once to grok it. I theoretically like Go's syntax for defining functions that take functions, but in practice I find it quite hard to scan, especially if there are more parameters on top of the function, or the passed function has multiple returns, or (god forbid!) it takes a function itself - it can all become quite a lot of bookkeeping.
yes, of course, but the discussion I replied to was talking about this specific idiom, this application of higher order functions, and not about the use of higher order functions generally.
It's important to compare with the previous slide, to see the problem this is trying to solve. It'd be really easy to do a lot of
if err != nil {
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
}
all over the place, as the error return type isn't part of the standard Handler signature.
The decorator approach lets you return errors from your handlers, and then have your actual error handling centralized; you may want to send an e-mail to the ops team, send it to a third party exception management web service, etc.
Loss of strong typing and runtime type detection is what makes me sad.
Well, Go 1 has some of the problems of Java 1: no generics, typecasts from interface{} here and there, simplistic GC.
Reasons are probably similar: this all is good enough for version 1, and can later be improved upon.
"Well, Go 1 has some of the problems of Java 1: no generics, typecasts from interface{} here and there, simplistic GC. Reasons are probably similar: this all is good enough for version 1, and can later be improved upon."
Except I doubt we're going to see generics in Go version 2, whenever that may be. The sentiment against it has been pretty strong in the golang-nuts mailing list.