Hi HN. If you get some value out of these slides, great! But please, do me a favor: don't try too hard to construct a narrative from a stack of slides which:
a) represent about 15% of the actual semantic content of the presentation; and
b) are shaped by the consideration that there's a limit to how much code you can put on a slide and have it still be readable.
Thank you for the note. There is a big difference between slides used in a presentation and slides intended to be read like a book. With the former, it is pretty useless to have the slides out of context. With the latter, there is no need for the author to speak. Fortunately, someone posted a video of the presentation, a better link than the initial submission's (http://confreaks.net/videos/614-cascadiaruby2011-confident-c...).
Footnote to (a): except for the slides which are intended to be unreadable because they are about the shape of the code rather than the content. An effect that works very well across a room but which is rather lost when viewing the slides on the web and without narration.
There have been a few slide-deck-style presentations on HN recently, and I usually find them frustrating because even when the subject is interesting (which is the case most of the time), the slides make you feel like so much information is missing.
Although you say that there is a lot of content missing, I thought your slides conveyed a lot of useful information and actually made me feel like I got what your presentation was about. Well done!
This is great, but I'm a little worried about the custom Maybe function that slipped in about halfway through. Unless it's part of a standard framework (like ActiveSupport), if you use it in your projects you're likely going to find that:
a) Integrating with other codebases that don't use Maybe will result in code with an inconsistent style, and
b) If Maybe (or something like it) catches on enough, you'll hit collisions where two projects or libraries have different ideas of what Maybe means. (Maybe one uses NilClass, the other the custom NullClass, for example.)
(The PHP community has taken this to the extreme, with developers getting burnt over and over again with differing "standard" components using the same set of names.)
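For readers who haven't seen the talk: a minimal sketch of what one such `Maybe` helper might look like (this is an illustration, not necessarily the version from the slides). `nil` gets wrapped in a "black hole" null object that swallows any message and returns itself:

```ruby
# A null object that responds to everything by returning itself,
# so chained calls on a missing value never raise NoMethodError.
class NullObject
  def method_missing(*_args, &_block)
    self
  end

  def respond_to_missing?(*)
    true
  end

  def nil?
    true
  end
end

# Conversion function in the style of Kernel#Array / Kernel#Integer:
# real values pass through untouched, nil becomes a NullObject.
def Maybe(value)
  value.nil? ? NullObject.new : value
end

user = nil
Maybe(user).profile.name.nil?  # => true, and no exception along the way
Maybe(42)                      # => 42, unchanged
```

The collision worry above applies directly here: two libraries each defining a top-level `Maybe` like this would silently shadow each other.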
It suggests that what the author really wants is a more expressive type system, one that can do some static verification (e.g. Scala's Option, Haskell's Maybe).
Another way to get the behavior of the described Maybe method is to enforce a coding style where functions require a default return value to be passed in - this lets it be handled in a precise manner and surfaces most problems quickly, without butchering fundamental assumptions of the language.
The real difficulty comes in when you want to enforce a branch for functions that are "rarely null" instead of having them blow up much later when you inevitably forget to handle the null case. A lot of the time the only immediate way to avoid this class of error is to use algorithms that will never stumble over a null (which tends to make them a lot slower) - if you have macros or richer type systems, other things are possible of course.
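The "pass in a default" style mentioned above already has an idiom in Ruby: `Hash#fetch` takes the fallback as an argument, so the lookup can never hand back a surprise `nil`. A small sketch (method and key names are illustrative):

```ruby
# The caller decides the fallback up front, so downstream code never
# needs a nil check on the return value.
def find_timeout(config, default)
  config.fetch(:timeout, default)
end

settings = { retries: 3 }
find_timeout(settings, 30)           # => 30 (key absent, default used)
find_timeout({ timeout: 5 }, 30)     # => 5  (key present, default ignored)
```

Without the `default` argument, `fetch` raises `KeyError` immediately instead of returning `nil`, which surfaces the problem at the call site rather than much later.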
I think a good solution (for Ruby) would be to add something like that in a future Ruby version. These things do conflict with Ruby's core semantics a bit, but they do open the door to some intriguing styles -- as does this: https://github.com/intridea/hashie
I proposed a not-dissimilar idea on ruby-core recently. It got a little support, but was mostly ignored. It can be remarkably difficult to get meaningful changes adopted.
If ruby core shrinks and more of the standard libraries move to gems, it will be easier to just write this sort of thing as a gem and let the best ideas win out. The state_machine gem is a great example of the slow (but very solid) evolution of acts_as_state_machine into a highly useful general ruby library.
It seems to me that the essence of this idea is that you attack a problem in stages, and that each stage should have completely prepared you for the next.
That's where the confidence comes in; you can move forward knowing that your assumptions will hereafter always apply.
Upvote for NullObject pattern -- by far my favorite pattern. Unlike most 'Design Patterns', this is a useful and non-obvious idea no matter what language you work with.
Well, it's hardly language-independent -- using the Maybe monad is more elegant in Haskell, for example. The beauty there is that your functions don't need to ever worry about getting null input (pronounced Nothing in Haskell), and you can trivially compose functions that possibly return Nothing with functions that don't.
Also, I'm not really sure how a pattern like this would work in Java, although this could just be because my Java is (thankfully) going rusty.
Good point -- I agree that pattern matching and monads are a more general solution to the Null Object problem.
As for Java -- http://www.cs.oberlin.edu/~jwalker/nullObjPattern/ has examples. But the case of the empty linked list is stretching it a bit. Somehow it seems to me you ought to be able to have a regular linked list where the "nothing" behaviour did not need a whole other class.
I usually explain Null Object like this:
Think of a website where you have users who might be logged in or not. A naive way to express this would be that, if the user is not logged in, your getUser() method returns null.
But then you have to keep testing for null, over and over again, before you can do anything with the user. And the user-is-null case is suspiciously similar to user-is-not-authorized-to-do-this-thing.
Solution: make a hierarchy where there's an abstract User class, concretized by NonLoggedInUser and LoggedInUser. The NonLoggedInUser returns false for all isAuthorizedTo().
Now you always have a User, it's just that sometimes that User does "nothing". So you just ask if user.isAuthorizedTo('doTheThing').
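The same idea translates directly to Ruby (class and method names here are illustrative, following the Java-flavored names above): a guest user that answers false to every authorization query stands in for `nil`.

```ruby
# Abstract base: every User can be asked about authorization.
class User
  def authorized_to?(_action)
    raise NotImplementedError
  end
end

class LoggedInUser < User
  def initialize(permissions)
    @permissions = permissions
  end

  def authorized_to?(action)
    @permissions.include?(action)
  end
end

# The Null Object: a "user" that is authorized to do nothing.
class GuestUser < User
  def authorized_to?(_action)
    false
  end
end

# Never return nil: when nobody is logged in, return the null object.
def current_user(session)
  session[:user] || GuestUser.new
end

user = current_user({})               # nobody logged in
user.authorized_to?(:do_the_thing)    # => false, and no nil check anywhere
```

Calling code now asks one question of whatever user it gets, instead of branching on `nil` first.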
I disagree with the point this presentation starts from. It's bad code that's refactored in a way I'm not sure is clearer, but that definitely makes it longer... Why not split the work to make the process clearer instead of trying to flatten what's already there? Some concerns I'd have:
Making sure message is an array even if it's X instead. Why do that at all? Why is there an "else" in the first place? It's not like you can support every possible type, so why not say - either it's an array, or you pass a wrong type?
Why is the code for obtaining message in that function? Why is similar code for destination there? Why is EPIPE being treated the same as a normal response from the process?
I've seen a number of really beautiful presentations like this recently. The page source indicates that it was generated from Org-mode. How much tweaking does it take to create a presentation like this, and is there any way I could see the source?
It uses the S5 slide tool. I'm an org-mode user for pretty much all my presentations now. I hacked up some m4 macros that make writing slides really simple.
I like the attitude, but can't help thinking that using a decent modern statically typed language prevents half of the problems this article sets out to solve to begin with.
That's funny. As I was reading the slides, I was thinking about just how unconfident (in the sense of the presentation) most Java appears to me. I'm thinking specifically of the messily nested try/catch/finally blocks needed when creating a JDBC connection (finally cleaned up a little in 1.7). Or perhaps the abundance of stuff like
if (foo != null && foo.getSomething()) {
or even weird little conventions that you get used to, like
if ("".equals(aString))
I think with some effort, you can avoid some of this sort of thing, but it's not always easy.
(Or, perhaps you didn't intend to include Java as a member of the set "decent statically typed languages?" ;))
Java isn't the best example, no :-) For example, its checked exceptions basically force dealing with them all over the code (if you want a nice API without seventy throws clauses, that is). The equals example you're giving is horrible indeed.
My personal favourite, which is C# at the moment, also has the "null" misdesign so you're right about that. Still, none of the duck typing issues exist (other than the null checking, which admittedly does undermine my point). Also, the IDE helps you do it right, which gives you more confidence as you write the code.
This really depends on what you mean by "modern statically typed language". If you mean something like Haskell, I agree with you but there are too many other differences to make Haskell and Ruby comparable. If you mean Java, then I'm not sure it prevents that many problems, but it does make developers less productive. If you mean something else (C#? Scala? F#? C++?) then I don't have enough experience to comment.
I've tried to code like this already, but one scenario I've found that keeps ruining this style is external API calls.
I'm now finding my code heading back to the old littered style whenever I have to update or get information from someone else's API just because I have to handle potential failure there and then.
Very frustrating, anyone got any insights on how they're dealing with it?
I almost invariably isolate any API calls behind some kind of wrapper facade. In fact, a lot of the presentation is about exactly that: adapting Ruby's IO primitives--which can raise various exceptions, and may return a process exit status in the form of a global variable--in code that isolates their idiosyncrasies.
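One way such a wrapper facade might look in Ruby (the gateway, the `Result` object, and the endpoint are all illustrative, not from the talk): the facade absorbs exceptions and status-code handling at the boundary and hands back a uniform result, so calling code stays on the happy path.

```ruby
require "net/http"
require "uri"

# A uniform result object, so callers never inspect nil or rescue
# exceptions themselves.
class Result
  attr_reader :value, :error

  def initialize(value: nil, error: nil)
    @value = value
    @error = error
  end

  def success?
    error.nil?
  end
end

# The facade: all knowledge of HTTP failure modes lives here.
class WeatherGateway
  def initialize(http: Net::HTTP)
    @http = http  # injectable, so tests can swap in a double
  end

  def forecast(city)
    response = @http.get_response(URI("https://example.com/forecast?city=#{city}"))
    if response.is_a?(Net::HTTPSuccess)
      Result.new(value: response.body)
    else
      Result.new(error: "HTTP #{response.code}")
    end
  rescue StandardError => e
    Result.new(error: e.message)
  end
end

# Calling code reads confidently:
#   result = WeatherGateway.new.forecast("Portland")
#   result.success? ? render(result.value) : log(result.error)
```

The idiosyncrasies (timeouts, DNS failures, non-200 responses) are handled once, in one place, instead of being littered through every caller.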
I think the point is that Array(h) destroys the Hash, whereas [h].flatten will leave it intact, adding another unexpected gotcha to Array().
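For the record, the difference is easy to demonstrate: `Array()` splats a `Hash` into key/value pairs, while `[h].flatten` leaves it intact, because `flatten` does not recurse into hashes.

```ruby
h = { a: 1, b: 2 }

Array(h)      # => [[:a, 1], [:b, 2]]  -- the hash structure is gone
[h].flatten   # => [{ a: 1, b: 2 }]    -- hash preserved as one element
```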
Array's terrible friend Integer() crashed my app in production before, so I found that interesting to know; not necessarily related to your talk though. :)
Recently I have been thinking a lot about Dependency Injection. It has clear benefits - like in testing - but there is also something that resonated with my preferences on a different level. It had been puzzling me for some time until I identified it - DI lets you code with fewer digressions: http://perlalchemy.blogspot.com/2011/10/concentration-and-fl...
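A tiny illustration of the testing benefit (names here are hypothetical): injecting the clock keeps the class focused on its own logic and makes it trivially testable without stubbing globals.

```ruby
class Report
  def initialize(clock: Time)
    @clock = clock  # dependency injected; defaults to the real clock
  end

  def header
    "Generated at #{@clock.now}"
  end
end

# In production, use the default:
Report.new.header  # uses Time.now

# In a test, swap in a predictable double:
FakeClock = Struct.new(:now).new("2011-10-01 12:00")
Report.new(clock: FakeClock).header  # => "Generated at 2011-10-01 12:00"
```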
Browsing 69 slides was painful in Mobile Safari, but the big picture message + simple example is good: encapsulate code to improve readability (it's not just for classes in OOP).
The modeling of:
writing prose <-> writing code
reminds me of:
micro air vehicles flying <-> birds flying.
I've been thinking about exactly this problem. I'm now re-organizing my unit and integration tests into some coherent order, which I hope will help refactor to a better code base.
Any thoughts out there on how to organize tests to support "confident code"?
I really like the basic idea, but there do seem to be a few potential drawbacks.
(1) Performance. Coercing values that might have been the target type to start with is wasteful. Extracting secondary paths into separate functions is wasteful.
(2) Readability of non-primary paths. Extracting non-primary paths into separate functions might make them harder to follow. Error paths matter. They often demand greater diligence than the happy path, so sacrificing them for the sake of the happy path would be bad. The increased length of the code bothers me a lot less than that. Namespace pollution could also be a concern in larger projects. If the code is reused, great, but if it's single-use then this refactoring would be bad for maintainability.
(3) Complex recovery and variable scope. The cow example is fine, but a lot of real code has to deal with much more complex error conditions. This is not just a matter of what can fit on a slide; handling more complex errors introduces fundamentally different issues. Often you need to know how far you got in order to unwind properly, and that information is likely to be contained in local variables. In some cases this means you should nest functions more, but this also destroys the "narrative" structure plus look out for points 1 and 2. In other cases it's just not even feasible, so splitting the error path into a newly-created function means having that function parse what really should remain local variables.
The idea of preserving narrative flow is good. The conceptual framework of input, action, output and error is immensely useful. Given these caveats, though, identifying one style as "bold" and one as "timid" is just an immature appeal to emotion. It's not timid to write efficient code, or code that respects separation of concerns. It's not bold to write code that preserves its own simple structure at the expense of everything around it. This could be framed as selfish vs. cooperative code, making the opposite emotional appeal, and be just as valid. The goal is to write clear code, and this is just one way that applies in some circumstances and not others.
Actually, that's what happens when your platform doesn't support such fundamental utilities as cowsay. I can't imagine how you get anything done... (On a loosely related note, I actually had a somewhat reasonable use case for cowsay at a hackathon recently, so it isn't completely useless.)
That's not a Ruby issue, it's a cowsay issue. Unix base supports it, Windows doesn't. Ruby actually has a great deployment system. And the author's general point, despite him being an obvious Ruby programmer, is that efficiency and confidence are more important than scrambling to cover your bases, and what the latter means in coding behavior. The examples would differ, but his point is universal.
Mea culpa. I did not realize that it was an interface to cowsay, instead of an implementation of it.
There is something to be said about using a platform-dependent utility in a talk about an eminently portable language, but I do not think that is necessary here.
A lot of this material has been covered in more detail on my blog (http://avdi.org/devblog). For instance, I wrote a whole article about Maybe, NullObject, and the limits of the ability to make objects falsy here: http://avdi.org/devblog/2011/05/30/null-objects-and-falsines...