Why testers? (joelonsoftware.com)
39 points by johns on Jan 27, 2010 | 29 comments



Eh, it seems fairly unobjectionable if a bit overly simplistic to me. I did testing for about 7 years prior to dev, and my take is that it a) depends on the size of the team and b) a fully fleshed out QA team needs a variety of skills and fills several roles:

On the one hand, you need 1) grinders who can do the boring, script-driven regression testing that is essential but difficult to automate.[1] You also need 2) people with a good eye for usability and a good overview of the system who can provide the sort of feedback Joel mentions. A certain amount of creativity and exploratory skill helps with ad hoc testing and devising test cases. "I wonder what happens if I push this button and then that one."

Then, 3) someone who can read code and who understands lower-level (OS/database) stuff can come up with interesting test cases that a black-box tester may miss. As in, "what happens to temp files/tables that are created by the app? are they ever deleted?" And finally, 4) if you can find them, you need someone who can program in order to build test tools and automated frameworks (that's mostly what I did), the goal being to improve the efficiency of the entire team. Finding all of these wrapped in a single person is almost impossible, as #1 tends to be incompatible with the rest.
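
To make that temp-file example concrete, here's a minimal sketch of the kind of check a code-aware tester might automate. The "myapp" command, its --batch flag, and the input file are hypothetical stand-ins for whatever the application actually exposes:

    import os
    import subprocess
    import tempfile

    def test_app_cleans_up_temp_files():
        # Snapshot the temp directory, run the app once, and diff.
        tmp_dir = tempfile.gettempdir()
        before = set(os.listdir(tmp_dir))

        # "myapp" and "--batch" are hypothetical; substitute the real command.
        subprocess.run(["myapp", "--batch", "sample-input.dat"], check=True)

        leftovers = set(os.listdir(tmp_dir)) - before
        assert not leftovers, "app left temp files behind: %s" % sorted(leftovers)

    if __name__ == "__main__":
        test_app_cleans_up_temp_files()
        print("ok - no temp files left behind")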

[1] Obviously, if you can automate these in a way that's cost effective, that's preferable, but that's not the typical case.


The dog analogy is a bit off. Programmers are not like dogs. We can remember mistakes we made a year ago and still learn from them. You want a short feedback cycle, not because it's the only way your programmers will learn anything, but because you can only improve so much each cycle. The only way to improve the necessary amount is to go through a lot of cycles, preferably in the shortest possible time.

Feedback from testers is great, but I've found a bigger reason why I love our QA team. For whatever reason, it's hard to implement something and test it at the same time. It's just hard to make my brain think in both modes at once. It's also hard to test something that I've implemented, because my brain is biased by the thought processes it used during implementation. Trying to think in both modes (implementation and testing) means I'll probably end up doing neither effectively. If you have a great QA team, you don't have to think in both modes. It's great.

That hints at his point about letting incompetent programmers do QA. I completely agree with him. A tester thinks differently than a programmer. Hire great testers. So many companies make that mistake.

I'm glad to see Joel take a public stand on automated tests, too. Automated tests have their place and their value, but his criticisms of them are completely valid. No amount of automated testing can obviate the need for good human QA. For some applications, automated testing is such a pain to get right and maintain that it's more work than it's worth.


The quicker the feedback, the better - for anyone; read any fairly recent (last several decades) book on study skills and learning. It is largely the same principle as having trouble reading your own code a few weeks after you wrote it.


To be honest, I think this is one of the few areas where Joel is off the mark.

In my experience, the "tester" is a useless position that usually only exists because of a deficiency in one of two areas.

The first reason you need testers is that your programmers can't - for whatever reason - turn out code that works. Either they don't sufficiently understand the problem scope, they're overworked, or they're just lazy and don't bother running through enough of their own quality controls before releasing changes.

The second area is that you aren't getting the right feedback from customers. Joel uses an example in his article:

With long-cycle shrinkwrap software, it can take a year or more to hear feedback from customers.

The problem is the cycle is too long. Fix that.

I find this is mostly due to more basic things, like separating client service from programming (hello, feedback!), allowing salesmen to mollycoddle their clients, or not even bothering to ask or listen.

The "tester" is the response to the existence of either one of both of the symptom above. By their very existence, they are a sign that you are either working in a company that has placed itself in a less than ideal market (selling enterprise software, for example), is poorly led, or has leadership that doesn't listen.

All this thinking hearkens back to the view of computer engineering as building a house, an analogy so flawed that even PG saw fit to write a book with a more apt one.

This has been my experience and I have yet to see a situation where it isn't true.


To me, your experience sounds an awful lot like not having much experience in a production environment.

In my experience, context-switching between programming and testing gets quite cumbersome with difficult problems, and can lead to overlooking smaller issues. If I'm busy testing every single edge case I can think of, who's doing the development? With separate testers, you get the benefit of concurrency (the software can continue being tested while issues from the previous version are fixed), and subsequently more software developed.

"The problem is the cycle is too long. Fix that."

How so? What if I'm developing a new product that has a full specification and design document, and is expected to meet customer acceptance goals? I don't see a deficiency in the development cycle if I need someone else to help test my software in a manner that I simply don't have time for.

Does your assertion mean that any software that contains bugs is caused by lack of programmer skill? I, too, would like to learn the secret to writing completely issue-free code if you're willing to share it.

Now, please don't take what I say to mean programmers shouldn't test their own creations. If you hand something that crashes 100% of the time to a tester, you're being lazy. The purpose of testers goes far beyond simply checking to see if it's "code that works."


To me, your experience sounds an awful lot like not having much experience in a production environment.

Actually on the contrary, it comes from a growing understanding that the 'production environment' is in fact the problem!

In my experience, context-switching between programming and testing gets quite cumbersome with difficult problems, and can lead to overlooking smaller issues.

Whether a programmer or a designated tester is actually doing the work has no bearing on over-testing. In fact, I'd argue that over-testing is more likely to occur if you use a tester.

If I'm busy testing every single edge case I can think of, who's doing the development?

Nobody! That's the crux of the matter right there:

1. Testing sucks, so if someone else is doing it then it's better for you, and, more importantly

2. the pressure to continue to develop new features at unsustainable speeds all but ensures that you'll eventually find yourself with so much technical debt that it will kill you.

What if I'm developing a new product that has a full specification and design document, and is expected to meet customer acceptance goals?

I'd say run away from this waterfall design process as fast as possible, to be frank.

I don't see a deficiency in the development cycle if I need someone else to help test my software in a manner that I simply don't have time for.

I do. I see a development cycle where you haven't been allocated enough time!

Does your assertion mean that any software that contains bugs is caused by lack of programmer skill?

No, because in order for a bug to be a bug it must not only exist in the code but also in the feature set (i.e. it must be found). The fact of the matter is that nobody - not you, not your testers, and certainly not your customers - knows exactly how the software will be used until it is actually being used.


"If I'm busy testing every single edge case I can think of, who's doing the development?"

The edge cases are the development. In my experience, the core functionality takes less than 10% of the time. A good programmer should be able to crank out an algorithm in a couple days; the problem comes when that algorithm meets end users and they request features that are completely insane from an engineering perspective, yet necessary for user acceptance.


The first reason you need testers is that your programmers can't - for whatever reason - turn out code that works.

And it's pretty unreasonable to expect them to do so. They can run their own small set of test cases, but a full test of a significant program is a lot of work, which (as mentioned in another thread) includes a lot of hard-to-automate grinding. That grind work can be done much cheaper (and probably more reliably) by a dedicated tester.


Yes -- much more reliably.

The testers on my team save me so much time. My code runs on six platforms. Every time I switch platforms it takes a lot of time and energy to switch physical and mental contexts. I have no hope of thoroughly testing all the platforms for each little change I make. So I usually just compile the code on two or three platforms, test on one, and check it in. The incremental build servers will tell me if I broke the build within a few minutes, and the testers will tell me if I made some platform-specific breakage the next day.

Furthermore, for each bug, the tester has narrowed it to specific repro steps, including what DOESN'T work, and usually has a screenshot of the problem and what's expected. Usually by reading the report, I immediately have a good idea of what's broken and that saves me tons of time too (as opposed to a vague "the xyz component is broken, fix plz" which I have seen in lots of shops where non-testers write bug reports).

I love our testers.


That grind work can be done much cheaper (and probably more reliably) by a dedicated tester.

I won't dispute that, as that is a major reason why companies love testers.

The problem is that while the testing is done cheaper and perhaps better, that's not really the goal of the whole process now, is it?

The goal of the process is to match the product to the customer's expectations/needs. I submit that the best possible way to do this is to have programmers who are also managing that need, even if it is more expensive and/or less efficient when viewed in isolation.


I think the best way is to have a specification coming from the customer indicating their needs and expectations. Any feature that's not mentioned in the document cannot be considered a "bug".

Based on this specification, testers/QA can then write their test cases.


You will never get that specification. Most people don't know what they need until you show it to them.

Specification-driven development is a great way to write software that gets you paid, gets some purchasing manager a bonus, and yet pisses off every end user who comes in contact with it.


> Either they don't sufficiently understand the problem scope, they're overworked, or they're just lazy and don't bother running through enough of their own quality controls before releasing changes.

The number of possible system states in larger systems can be enormous. I feel like to have this opinion, perhaps you have worked mainly on smaller systems? In larger systems, there can be so many possibilities that it would take several times longer to test than to code, and without reading spec docs, it is very difficult to tell what correct even is. For example, a student loan system I worked on fit this bill. The glib answer would be to say it was so complex because of poor programming, but that's not true. The legal reality it was modeling was complex and so the software could not be less complex than the underlying system's rules. I have worked on several such systems in different industries.

Now, you could argue that all programmers should become programming/QA people, but that would be self-defeating as programmer time is generally more expensive per-hour than QA time. Further, the type of person who is good at running through 400 steps of detail-oriented rote testing is not always the same person that you would like doing high-level design. Specialization makes life more efficient. Even if they did happen to be the same person, if they enjoyed one task more, why wouldn't they go to a job where they could focus on that type of task?


FWIW, I work on Google Search, possibly the largest software system on the planet, and Google is famous for being tester-light and automation-heavy. We rely much more on code reviews and on having good engineers who aren't rushed than on testers.

The reason is just as you state: beyond a certain point, the number of states in the system becomes so great that you simply can't test them all. Replacing developers with testers doesn't change that. So you're better off understanding what you're doing than trying to avoid all the possible ways that you could go wrong.


Automation-heavy is a good thing in my book. I think whether the developer writes the automation is often a function of what you're testing. For example, WinRunner has its own learning curve, and so seems to be left to QA more often, whereas Selenium seems to have a smaller curve.
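
For what it's worth, a bare-bones Selenium check is only a few lines (shown here with the Python bindings; the URL and page element names are made up), which is a big part of why its learning curve feels small:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical login page; only the Selenium calls themselves are real.
    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("testuser")
        driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        assert "Dashboard" in driver.title, "login did not reach the dashboard"
    finally:
        driver.quit()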

Generally though, I like the idea of someone else proofreading my work, because for the same reason that I forgot to account for a set of inputs, I will tend to forget to test with those inputs.


I feel like to have this opinion, perhaps you have worked mainly on smaller systems?

No, I've got a good mix, and I agree with you that this is a product of large systems more than simple web apps and the like.

The problem, though, is that the complexity of large systems is required in the first place; not that I have an immediate solution.

That and too many programmers that simply aren't able to do anything but program.


You can't say that complexity itself is the problem, because the whole of programming is encoding complex processes so as to simplify them from a human perspective. Without complexity, it's not testing that would go away, it's programming. You've basically said something like "the problem with space shuttles is that they have to aim their thrust precisely in order to achieve lift"—that's not the problem with space shuttles, that's the idea of space shuttles.


The problem is that everybody likes to think their problem is as complex as a space shuttle. The reality is that in 99% of cases, it isn't. In half of those you are 100% correct - the problem isn't even a programming one.


With respect, I suggest your experience is limited.

I deploy embedded safety-critical software into establishments such as nuclear naval bases. We need testers because while the code works (we use formal and semi-formal software derivation techniques) there are, on occasion, unexpected interactions between features. You need to run through combinations of configurations and actions, and some of it can't be scripted because the interactions are unexpected - the Black Swan principle.

Our cycle is a long one in part because acquisition programs in government and quasi-governmental organisations are like that. We can't deploy changes every day into the field. For one thing, we usually don't have access. For another, any change needs associated training and manuals.

It would be a waste of a talented programmer to run through the sorts of detailed and exhaustive testing our software has to pass before it will be accepted.

Perhaps you simply think that my company is placed in a less than ideal market. I assure you, it's worth being in the game.


Perhaps you simply think that my company is placed in a less than ideal market. I assure you, it's worth being in the game.

I do, only in the sense that in your market, your costs can justify your operations in a way that not many markets can match.

Joel makes bug trackers, so I stayed in that context although I spoke in very black and white terms. I'm not intending to profess the ultimate solution to everyone's process, but I am trying to speak to a majority of start-up centric developers-cum-managers.

Doing what you do... hell hire 25 testers for every component; just don't kill anyone.


"Doing what you do... hell hire 25 testers for every component; just don't kill anyone.

I think I missed this one, but how did you know he needed 25 testers for every component?


Hyperbole.


The second area is that you aren't getting the right feedback from customers.

Yes, if you are bringing your customers into the office and actually watching them use your product.

Usability testing is and always has been one of the most lacking areas of web development (probably other types of consumer software as well, but web is my area of expertise). A good tester can address these issues up front better than anyone else in the organization. Everyone else is too close to the project to give it a fresh look. It's no substitute for real usability testing, but it's a good deal closer than anything else internal.


I think more interesting than this actual article is Fog Creek's "QA and Test Specialist" job listing linked at the end: http://www.fogcreek.com/Jobs/QA.html

There are a bunch of small errors on the page that are exactly the kind of thing they'd expect a tester to catch and flag for fixing. For example, "you've" is written with a backtick ("you`ve"), "valuable" has a numeral one in place of the letter L, "morale" is spelled "moral," etc. I wonder if they're going to bring these up in the phone interview. If an applicant didn't notice the errors, they're probably not the right hire!


It probably depends on the type of company, but if there is a compliance/legal department, they're the ones looking for typos and such, and even the marketing department does. Especially when it's content (as opposed to a WinForms dialog box, for example), I don't think proofreading is part of a tester's job, but correct me if I'm wrong.

Of course, the ones you mention are pretty blatant, so I would assume that it would benefit him to mention them since it shows great attention to detail and quality.


I think it's worth pointing out that with many companies, testers do as much (if not more!) programming as programmers do. I know in our company we have a metric that tracks what percentage of tests are automated, and guess who writes the automated tests? Testers. Granted it's a different kind of programming since it's Perl rather than C, but our testers crank out quite a bit of code.


I'd hate to be his HR person. The deluge of resumes is going to be overwhelming. Although I don't agree with all that he says, I applaud him for pointing out something companies oftentimes miss: testers should test other people's code, not their own.


If you want to get the top .1%, you need to evaluate 1000 people.


Probably a fair amount more than that, as there will be a group of people who drift from one job interview to the next, not getting hired at any strong rate anywhere.



