RSpec Best Practices (betterspecs.org)
107 points by andreareginato on Oct 3, 2012 | hide | past | favorite | 24 comments



I can't believe this line is considered good practice:

    collection.should have(2).items
It illustrates what I believe is wrong with RSpec: the assertion style is completely disconnected from the API that is being tested. There's no way you can tell what methods are being called on the `collection` object without being intimately familiar with this specific RSpec matcher.

So what does that line actually do? Apart from calling strange methods like `items` on strange objects like `have(2)`, in the end the test passes iff `collection.length` equals 2. Compare RSpec with classic test/unit assertions:

    collection.should have(2).items

    assert_equal 2, collection.length
The second line is just so much more obvious to me.

On the other hand I really like the RSpec way of grouping tests and structuring test cases. My favourite test cases are written with minitest/spec (a lightweight test/unit-compatible clone of RSpec) and test/unit-style assertions.
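To make the comparison concrete, here's a minimal sketch of that combination: minitest/spec grouping with plain test/unit-style assertions. The `Stack` class is a made-up example, not from the thread.

```ruby
require "minitest/autorun"

# A throwaway Stack class, just so there is something to spec (hypothetical).
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items << item
  end

  def length
    @items.length
  end
end

# minitest/spec gives you RSpec-style describe/it grouping,
# while the assertions stay plain test/unit style.
describe Stack do
  before { @stack = Stack.new }

  describe "#push" do
    it "grows the stack by one per call" do
      @stack.push(:a)
      @stack.push(:b)
      assert_equal 2, @stack.length  # obvious: no have(2).items matcher needed
    end
  end
end
```

The assertion reads exactly like the `assert_equal 2, collection.length` line above, but the output still groups and documents tests the RSpec way.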


In fact, the RSpec team is in the process of changing the default to the new expect matcher syntax (http://myronmars.to/n/dev-blog/2012/06/rspecs-new-expectatio...).


Then write

    collection.length.should eq(2)
But I see nothing wrong with the other one once you know how the "have" matcher works. It will even let you write

    it { should have(2).items }
when your subject is set up properly.


Likewise, what most people really care about is the delta. Absolute matches work when the initial state is 0, but it's fun watching these sorts of tests fail when that's not the case.


This would've been good when I was just starting with RSpec. A couple things I'd like to add:

    Mock or not to Mock
Mock and stub relentlessly. Ensure that the object you're testing the behavior of sends the right messages to its collaborators but put the tests for those messages in the unit test for the collaborator. This keeps your tests isolated and fast. As long as you adhere to "Tell, Don't Ask" you should simply be able to assert that your method is sending messages to other objects without relying on the behavior that those methods perform.
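A runnable sketch of that discipline, using stdlib Minitest::Mock instead of rspec-mocks (the doubles work the same way); `Order` and its collaborators are made-up names. The spec asserts only the outgoing messages, never the collaborators' behavior:

```ruby
require "minitest/mock"

# Hypothetical Order that tells its collaborators what to do ("Tell, Don't Ask").
class Order
  def initialize(gateway, mailer)
    @gateway, @mailer = gateway, mailer
  end

  def place!
    @gateway.charge(self)       # outgoing message we want to assert on
    @mailer.send_receipt(self)  # ditto
  end
end

# The test only checks that the right messages are sent; what charge and
# send_receipt actually do is covered by the collaborators' own unit tests.
gateway = Minitest::Mock.new
mailer  = Minitest::Mock.new
gateway.expect(:charge, true, [Order])
mailer.expect(:send_receipt, true, [Order])

Order.new(gateway, mailer).place!
gateway.verify  # raises MockExpectationError if :charge was never sent
mailer.verify
```

With rspec-mocks the equivalent would be `gateway.should_receive(:charge)` on a `double`, but the isolation argument is identical.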

    Create only the data you need

    describe "User" do
      describe ".top" do
        before { 3.times { Factory(:user) } }
        it { User.top(2).should have(2).item }
      end
    end

The author advocates not creating dozens of objects but why make any at all? This seems like a place where setting expectations is a better course of action. There's no reason to test ActiveRecord query methods like this - just assert that the filters that you want are being set by the method and call it. I usually do something like this (using rspec-mocks):

    subject.should_receive(:popular).and_return(subject)
    subject.should_receive(:by_create_date).and_return(subject)
    ...
This lets you know that the class method is performing the filtering you want without actually fetching data from the db. Much like validations, this is the kind of thing you can rely on either the Rails team or the db adapter writers to get the actual filtering right without having to test this over and over in your app. Having your unit tests (business logic!) actually hit your persistence layer like this is the kind of thing that makes test suites slow!
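A framework-free sketch of the same idea: a fake relation records which scopes get chained, so you can assert on the query-building logic without ever touching the database. (`User.top` and the scope names are made up for illustration.)

```ruby
# A fake ActiveRecord-style relation: every scope call is recorded and
# returns self, so chains like popular.by_create_date.limit(n) keep working.
class FakeRelation
  attr_reader :calls

  def initialize
    @calls = []
  end

  def method_missing(name, *_args)
    @calls << name
    self  # return self so scopes keep chaining, like a real relation
  end

  def respond_to_missing?(*)
    true
  end
end

class User
  class << self
    attr_accessor :relation  # injected so tests can swap in the fake
  end

  def self.top(n)
    relation.popular.by_create_date.limit(n)
  end
end

User.relation = FakeRelation.new
User.top(2)
p User.relation.calls  # => [:popular, :by_create_date, :limit]
```

This is exactly what the `should_receive(...).and_return(subject)` chain above expresses in rspec-mocks: the unit test pins down which filters are applied and leaves the filtering itself to ActiveRecord.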


This keeps your tests isolated and fast.

No, writing good tests keeps your tests isolated and fast. Creating lots of mock objects distracts you from writing good tests, because what you're testing is your ability to 1) write mock objects that conform to the ad hoc interfaces you've produced and 2) maintain those mock objects when those ad hoc interfaces change.

If you want to know if your code works, you still have to test if it all works together.


I totally agree - if you want to know if it works, you have to test it all together; however, that's not how I approach unit testing. I don't think mock objects are a good idea during integration testing with perhaps a few exceptions such as an external web service. When you want to ensure that your application works, testing the steps that actual users will go through is the way to know and mocking or stubbing functionality won't help.

Personally, I view unit testing differently. I see it as ensuring that my object gets the right data and sends the right messages. If you start adding tests for things like "did it save to the database" or "did it find 3 results" you're now involving your persistence layer with your testing of business logic. It's a tangential concern.


But if you're having to create lots of mock objects, that's a huge signal that your objects are tightly coupled, and that not only are you writing a bad test, you've written a bad subject-under-test.

Yes, you still have to write integration tests. But the discipline of testing your objects in isolation and outlining the objects they collaborate with is a major benefit of using test doubles, not a downside. If you can't reason about your objects in isolation, how can you expect to reason about them in aggregate?


If you can't reason about your objects in isolation, how can you expect to reason about them in aggregate?

I don't use my objects in isolation.

Maybe I can mock up some factory which creates model objects outside of my persistence mechanism, but what value is that? I care that when I create an object in response to an action from a real user the persistence mechanism produces the correct object.

I happily bulk load testing data into my persistence mechanism for testing purposes, but I see almost no reason to mock it for tests.


  subject.should_receive(:popular).and_return(subject)
Now you're testing that your piece of code is calling the "popular" method, but you have no idea whether or not that method even exists in your implementation. Too much mocking is worse than "no mocking", because "no mocking" is slow but it doesn't tell you lies.


I really don't like the idea of "no controller tests at all" as a best practice.

If you're having to mixin Rack::Test::Methods to write your "full-coverage" integration spec, then you're really writing a controller test. It should go in a controller spec without all the horrid mixin hacks.


I test at controller level that things aren't accessible (security) and at integration level that the appropriate things are accessible and error messages are correct when not (functionality).


Yup, the real best practice is to use your judgement and choose the right test for the job.

If it's something that you can see in a browser (and the results are visible too), then it's a candidate for an integration spec.

You really shouldn't be integration testing stuff like security. For example, testing a user can't submit a form they can't see (which is where horrible Rack::Test::Methods mixin hacks start appearing).

You also shouldn't (imho) be integration testing side effects that aren't visible to the user. For example, if an action ends up in an audit log that the user is never going to see, there's no point firing up an integration test to do that. A light-weight controller test can do that just fine.


I'm not a ruby dev, but that first example seems to fly in the face of the whole semantic DSL that RSpec is attempting to accomplish.


Actually, quite to the contrary, I think it's more semantic. What you probably don't know is that `Foo#bar` is the way of describing an instance method `bar` in class `Foo` and `Foo.bar` is a class method. RSpec's `describe` announces what unit of code you're writing specs for and `context` (although functionally equivalent to `describe`) is used to enumerate the different contexts in which you're going to test a method.

A typical spec looks like this:

    describe Foo do
      describe "#bar" do
        # specs for Foo#bar ...
      end

      describe ".baz" do
        # specs for Foo.baz ...
      end
    end
The reason I say that using the method names for `describe` is more semantic is because it says exactly what you're describing. Also, as a side note, RSpec is smart enough to realize that "#bar" or ".baz" are referring to methods. In the above example, a spec for `Foo#bar` would be described like this: "Foo#bar should return 42". That's the description that is printed out when the spec fails or if you use a long-form output formatter.


RSpec isn't aiming to be used as plain English; that's more of a Cucumber thing. I appreciate RSpec for that.


Personally I don't like the first example. I also don't believe it resembles cucumber at all.

It's certainly closer to a documentation string (the kind you see in Python or Common Lisp).

In any case given the following:

    describe 'if the user is an admin' do
versus

    describe '#admin?' do
Ignoring the fact that the author is ignoring BDD here, I certainly like the first one more, even if the user part is a bit redundant.

Nevertheless, what the author has done is commendable, but there are a few cases there that I just can't agree with. I don't see anything really wrong about them; it's just that in real life things can get complicated.


To be honest, I think both styles are good practice, but it totally depends on the context. For example, if I want to describe how the "#admin?" method is working, I'll use

    describe "#admin?" do ...
but if I'm describing a different method, that does different things depending on whether the user is an admin or not, I'll do the following:

    describe "#some_other_method" do
      describe "if the user is an admin" do
        ...
So you can mix and match both styles, depending on what exactly you're describing (and its context).

If you look at the OP more closely, it even says this style is only recommended for "describing methods".


As an experienced Ruby dev, I'd say these are pretty much all standard practices in real projects. I don't like shared examples, though; I wince at the thought of that much complexity being hidden away in a test!


Could you elaborate on what you mean by "shared examples"?


He's referring to this: https://www.relishapp.com/rspec/rspec-core/v/2-0/docs/exampl...

I keep shared examples to a minimum, but I do find them useful in some cases, such as when I have to test the same few attributes over and over in different contexts. But I define the shared example in the same file that uses them, and never have project-wide shared example groups.
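For context, here's the shape of the feature, sketched first in RSpec (in comments, since it needs the gem) and then with the same "define it next to its users, in one file" discipline using stdlib minitest, where a plain module of test methods plays the shared-example role. All class names here are made up.

```ruby
require "minitest/autorun"

# RSpec's shared examples look roughly like this (sketch, not run here):
#
#   shared_examples "an empty collection" do
#     it { should be_empty }
#   end
#
#   describe Array do
#     subject { [] }
#     it_behaves_like "an empty collection"
#   end
#
# The minitest equivalent: a module of tests mixed into each case.
module EmptyCollectionTests
  def test_starts_empty
    assert subject.empty?
  end

  def test_has_zero_length
    assert_equal 0, subject.length
  end
end

class ArrayTest < Minitest::Test
  include EmptyCollectionTests
  def subject; [] end
end

class HashTest < Minitest::Test
  include EmptyCollectionTests
  def subject; {} end
end
```

Because the module lives in the same file as the classes that include it, a reader never has to hunt through project-wide support files to see what's being asserted.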


I am (I'm a beginner), and I agree. I'm actually reading The RSpec Book right now, and though I'm not far into the RSpec docs, it seemed to suggest using this sort of documentation.


I know it may look strange in isolation, but I found it worked well to help ensure I had unit tests across each of my methods, since I would list each method using the suggested style. Where I want more narrative context, e.g. more nicely worded lines, I wrap a describe or context around the specific method test or group of methods being tested.

Once you have lots of tests, this style takes much of the "what do I write here" mental effort out and gives you a clean, consistent style across your whole project.


I typically go one step further in the "mocking http requests" and write a simple class that does nothing but make the http call and translate input/output. From there you can stub/mock that object as appropriate. That way you have a very simple interface to mock/stub against and don't have to worry about things like webmock. Well, at least until you start testing your http service interaction classes, but at least those are simpler and have relatively clearly defined uses.
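A minimal sketch of that pattern; the class name, host, and endpoint are all made up. The wrapper is the only object that knows HTTP exists, so collaborators' specs stub it directly:

```ruby
require "net/http"
require "uri"
require "json"

# Hypothetical wrapper: the single class in the app that makes HTTP calls
# and translates input/output. Everything else depends on #fetch_user,
# so specs stub this one object and never need webmock.
class UserGateway
  def initialize(host = "api.example.com")
    @host = host
  end

  # Makes the HTTP call and translates the JSON body into a plain Hash.
  def fetch_user(id)
    res = Net::HTTP.get_response(URI("https://#{@host}/users/#{id}"))
    JSON.parse(res.body)
  end
end

# In a collaborator's spec (rspec-mocks), the whole gateway is one stub:
#
#   gateway = double("UserGateway")
#   gateway.stub(:fetch_user).and_return("name" => "alice")
```

Only the spec for `UserGateway` itself ever needs real request stubbing, and that class is small enough that it's easy to cover.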



